Feb 23 12:59:01.547080 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 23 12:59:02.168526 master-0 kubenswrapper[4202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 12:59:02.168526 master-0 kubenswrapper[4202]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 23 12:59:02.168526 master-0 kubenswrapper[4202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 12:59:02.168526 master-0 kubenswrapper[4202]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 12:59:02.168526 master-0 kubenswrapper[4202]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 23 12:59:02.168526 master-0 kubenswrapper[4202]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
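The deprecation warnings above all point at the same remedy: move these command-line flags into the KubeletConfiguration file passed via --config. A minimal sketch of what that migration could look like, using the flag values this kubelet logs at startup (field names are from the kubelet.config.k8s.io/v1beta1 API; treat the exact values here as illustrative, taken from this log rather than a recommendation):

```yaml
# Sketch of a KubeletConfiguration covering the deprecated flags warned about above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: "/var/run/crio/crio.sock"
# replaces --volume-plugin-dir
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"
# replaces --register-with-taints
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# replaces --system-reserved
systemReserved:
  cpu: "500m"
  memory: "1Gi"
  ephemeral-storage: "1Gi"
```

--minimum-container-ttl-duration has no config-file equivalent; per the warning it should be expressed via evictionHard/evictionSoft instead, and --pod-infra-container-image moves to the container runtime's own configuration.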
Feb 23 12:59:02.170618 master-0 kubenswrapper[4202]: I0223 12:59:02.169463 4202 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.181921 4202 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182013 4202 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182025 4202 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182034 4202 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182043 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182053 4202 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182061 4202 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182071 4202 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182080 4202 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182090 4202 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 12:59:02.182067 master-0 kubenswrapper[4202]: W0223 12:59:02.182100 4202 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182110 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182119 4202 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182128 4202 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182137 4202 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182145 4202 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182153 4202 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182164 4202 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182177 4202 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182186 4202 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182195 4202 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182203 4202 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182213 4202 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182221 4202 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182229 4202 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182242 4202 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182253 4202 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182261 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182269 4202 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 12:59:02.182725 master-0 kubenswrapper[4202]: W0223 12:59:02.182278 4202 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182286 4202 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182294 4202 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182301 4202 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182330 4202 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182398 4202 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182411 4202 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182422 4202 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182431 4202 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182440 4202 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182450 4202 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182458 4202 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182468 4202 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182477 4202 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182484 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182495 4202 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182502 4202 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182511 4202 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182519 4202 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182526 4202 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 12:59:02.183660 master-0 kubenswrapper[4202]: W0223 12:59:02.182533 4202 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182541 4202 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182549 4202 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182556 4202 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182564 4202 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182572 4202 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182580 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182587 4202 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182596 4202 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182605 4202 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182614 4202 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182626 4202 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182637 4202 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182647 4202 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182674 4202 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182684 4202 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182694 4202 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182704 4202 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182715 4202 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182725 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 12:59:02.184751 master-0 kubenswrapper[4202]: W0223 12:59:02.182737 4202 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: W0223 12:59:02.182746 4202 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: W0223 12:59:02.182755 4202 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184060 4202 flags.go:64] FLAG: --address="0.0.0.0"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184088 4202 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184106 4202 flags.go:64] FLAG: --anonymous-auth="true"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184120 4202 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184163 4202 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184174 4202 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184186 4202 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184198 4202 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184207 4202 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184217 4202 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184228 4202 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184239 4202 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184248 4202 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184258 4202 flags.go:64] FLAG: --cgroup-root=""
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184268 4202 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184277 4202 flags.go:64] FLAG: --client-ca-file=""
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184286 4202 flags.go:64] FLAG: --cloud-config=""
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184297 4202 flags.go:64] FLAG: --cloud-provider=""
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184307 4202 flags.go:64] FLAG: --cluster-dns="[]"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184321 4202 flags.go:64] FLAG: --cluster-domain=""
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184332 4202 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 23 12:59:02.185998 master-0 kubenswrapper[4202]: I0223 12:59:02.184375 4202 flags.go:64] FLAG: --config-dir=""
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184399 4202 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184409 4202 flags.go:64] FLAG: --container-log-max-files="5"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184421 4202 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184430 4202 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184440 4202 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184450 4202 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184460 4202 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184469 4202 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184478 4202 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184488 4202 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184497 4202 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184508 4202 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184518 4202 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184527 4202 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184536 4202 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184545 4202 flags.go:64] FLAG: --enable-server="true"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184554 4202 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184578 4202 flags.go:64] FLAG: --event-burst="100"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184588 4202 flags.go:64] FLAG: --event-qps="50"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184597 4202 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184607 4202 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184616 4202 flags.go:64] FLAG: --eviction-hard=""
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184627 4202 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 12:59:02.187219 master-0 kubenswrapper[4202]: I0223 12:59:02.184637 4202 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184659 4202 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184669 4202 flags.go:64] FLAG: --eviction-soft=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184678 4202 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184687 4202 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184696 4202 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184706 4202 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184716 4202 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184725 4202 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184734 4202 flags.go:64] FLAG: --feature-gates=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184745 4202 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184755 4202 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184764 4202 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184774 4202 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184784 4202 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184794 4202 flags.go:64] FLAG: --help="false"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184803 4202 flags.go:64] FLAG: --hostname-override=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184813 4202 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184823 4202 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184833 4202 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184842 4202 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184851 4202 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184860 4202 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184869 4202 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184878 4202 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 12:59:02.188728 master-0 kubenswrapper[4202]: I0223 12:59:02.184919 4202 flags.go:64] FLAG: --kube-api-burst="100"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.184929 4202 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.184939 4202 flags.go:64] FLAG: --kube-api-qps="50"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.184949 4202 flags.go:64] FLAG: --kube-reserved=""
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.184958 4202 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.184967 4202 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.184977 4202 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.184986 4202 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.184994 4202 flags.go:64] FLAG: --lock-file=""
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185004 4202 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185026 4202 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185036 4202 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185054 4202 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185064 4202 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185074 4202 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185084 4202 flags.go:64] FLAG: --logging-format="text"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185093 4202 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185103 4202 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185112 4202 flags.go:64] FLAG: --manifest-url=""
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185121 4202 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185133 4202 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185143 4202 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185154 4202 flags.go:64] FLAG: --max-pods="110"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185163 4202 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185173 4202 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 12:59:02.190222 master-0 kubenswrapper[4202]: I0223 12:59:02.185182 4202 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185192 4202 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185200 4202 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185210 4202 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185219 4202 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185240 4202 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185249 4202 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185259 4202 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185268 4202 flags.go:64] FLAG: --pod-cidr=""
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185277 4202 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185290 4202 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185299 4202 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185309 4202 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185322 4202 flags.go:64] FLAG: --port="10250"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185334 4202 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185375 4202 flags.go:64] FLAG: --provider-id=""
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185384 4202 flags.go:64] FLAG: --qos-reserved=""
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185394 4202 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185403 4202 flags.go:64] FLAG: --register-node="true"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185412 4202 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185422 4202 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185437 4202 flags.go:64] FLAG: --registry-burst="10"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185447 4202 flags.go:64] FLAG: --registry-qps="5"
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185457 4202 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 12:59:02.191833 master-0 kubenswrapper[4202]: I0223 12:59:02.185466 4202 flags.go:64] FLAG: --reserved-memory=""
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185478 4202 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185487 4202 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185496 4202 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185505 4202 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185514 4202 flags.go:64] FLAG: --runonce="false"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185524 4202 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185533 4202 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185543 4202 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185552 4202 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185561 4202 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185571 4202 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185581 4202 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185591 4202 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185601 4202 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185610 4202 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185620 4202 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185629 4202 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185639 4202 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185648 4202 flags.go:64] FLAG: --system-cgroups=""
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185657 4202 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185678 4202 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185687 4202 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185696 4202 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185716 4202 flags.go:64] FLAG: --tls-min-version=""
Feb 23 12:59:02.193125 master-0 kubenswrapper[4202]: I0223 12:59:02.185727 4202 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: I0223 12:59:02.185736 4202 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: I0223 12:59:02.185746 4202 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: I0223 12:59:02.185755 4202 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: I0223 12:59:02.185764 4202 flags.go:64] FLAG: --v="2"
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: I0223 12:59:02.185779 4202 flags.go:64] FLAG: --version="false"
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: I0223 12:59:02.185790 4202 flags.go:64] FLAG: --vmodule=""
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: I0223 12:59:02.185802 4202 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: I0223 12:59:02.185812 4202 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186064 4202 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186080 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186089 4202 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186099 4202 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186108 4202 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186117 4202 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186125 4202 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186133 4202 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186141 4202 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186149 4202 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186158 4202 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186165 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186173 4202 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 12:59:02.194493 master-0 kubenswrapper[4202]: W0223 12:59:02.186181 4202 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186193 4202 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186202 4202 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186212 4202 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186220 4202 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186228 4202 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186242 4202 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186252 4202 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186261 4202 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186270 4202 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186281 4202 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186289 4202 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186297 4202 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186306 4202 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186320 4202 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186333 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186378 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186390 4202 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 12:59:02.195580 master-0 kubenswrapper[4202]: W0223 12:59:02.186403 4202 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186412 4202 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186421 4202 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186429 4202 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186437 4202 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186445 4202 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186457 4202 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186466 4202 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186474 4202 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186482 4202 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186490 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186498 4202 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186506 4202 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186515 4202 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186523 4202 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186531 4202 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186539 4202 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186547 4202 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186555 4202 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186563 4202 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 12:59:02.196617 master-0 kubenswrapper[4202]: W0223 12:59:02.186575 4202 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186583 4202 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186591 4202 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186599 4202 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186608 4202 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186617 4202 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186624 4202 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186634 4202 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186644 4202 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186654 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186664 4202 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186674 4202 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186684 4202 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186693 4202 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186703 4202 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186712 4202 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186720 4202 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186728 4202 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186735 4202 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186743 4202 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 12:59:02.198288 master-0 kubenswrapper[4202]: W0223 12:59:02.186751 4202 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 12:59:02.199401 master-0 kubenswrapper[4202]: I0223 12:59:02.186778 4202 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 12:59:02.200112 master-0 kubenswrapper[4202]: I0223 12:59:02.200019 4202 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 23 12:59:02.200112 master-0 kubenswrapper[4202]: I0223 12:59:02.200089 4202 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 12:59:02.200224 master-0 kubenswrapper[4202]: W0223 12:59:02.200191 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 12:59:02.200224 master-0 kubenswrapper[4202]: W0223 12:59:02.200204 4202 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 12:59:02.200224 master-0 kubenswrapper[4202]: W0223 12:59:02.200210 4202 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 12:59:02.200224 master-0 kubenswrapper[4202]: W0223 12:59:02.200217 4202 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 12:59:02.200224 master-0 kubenswrapper[4202]: W0223 12:59:02.200224 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 12:59:02.200224 master-0 kubenswrapper[4202]: W0223 12:59:02.200230 4202 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 12:59:02.200224 master-0 kubenswrapper[4202]: W0223 12:59:02.200237 4202 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200244 4202 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200251 4202 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200257 4202 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200262 4202 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200268 4202 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200273 4202 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200279 4202 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200285 4202 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200290 4202 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200295 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200301 4202 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200307 4202 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200315 4202 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200323 4202 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200333 4202 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200362 4202 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200370 4202 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200377 4202 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 12:59:02.200693 master-0 kubenswrapper[4202]: W0223 12:59:02.200383 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200389 4202 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200395 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200402 4202 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200410 4202 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200416 4202 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200422 4202 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200427 4202 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200432 4202 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200438 4202 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200444 4202 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200449 4202 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200456 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200463 4202 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200470 4202 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200477 4202 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200484 4202 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200491 4202 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200497 4202 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200537 4202 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 12:59:02.201826 master-0 kubenswrapper[4202]: W0223 12:59:02.200546 4202 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200552 4202 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200562 4202 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200569 4202 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200576 4202 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200582 4202 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200587 4202 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200593 4202 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200598 4202 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200604 4202 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200609 4202 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200615 4202 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200620 4202 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200625 4202 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200631 4202 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200636 4202 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200642 4202 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200647 4202 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200654 4202 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 12:59:02.203317 master-0 kubenswrapper[4202]: W0223 12:59:02.200661 4202 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.200668 4202 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.200674 4202 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.200679 4202 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.200684 4202 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.200692 4202 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.200699 4202 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.200706 4202 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: I0223 12:59:02.200715 4202 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.201031 4202 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.201105 4202 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.201117 4202 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.201127 4202 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.201139 4202 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.201147 4202 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 12:59:02.204290 master-0 kubenswrapper[4202]: W0223 12:59:02.201155 4202 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201165 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201174 4202 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201182 4202 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201191 4202 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201198 4202 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201207 4202 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201215 4202 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201225 4202 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201233 4202 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201241 4202 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201249 4202 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201258 4202 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201266 4202 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201275 4202 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201283 4202 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201291 4202 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201299 4202 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201307 4202 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201314 4202 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 12:59:02.205154 master-0 kubenswrapper[4202]: W0223 12:59:02.201322 4202 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201331 4202 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201398 4202 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201408 4202 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201415 4202 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201423 4202 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201431 4202 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201439 4202 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201447 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201456 4202 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201472 4202 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201482 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201492 4202 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201502 4202 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201515 4202 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201528 4202 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201537 4202 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201548 4202 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201559 4202 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 12:59:02.206552 master-0 kubenswrapper[4202]: W0223 12:59:02.201570 4202 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201578 4202 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201588 4202 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201596 4202 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201607 4202 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201615 4202 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201623 4202 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201630 4202 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201639 4202 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201646 4202 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201655 4202 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201663 4202 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201670 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201679 4202 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201687 4202 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201698 4202 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201708 4202 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201716 4202 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201724 4202 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201733 4202 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 12:59:02.207723 master-0 kubenswrapper[4202]: W0223 12:59:02.201742 4202 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: W0223 12:59:02.201750 4202 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: W0223 12:59:02.201761 4202 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: W0223 12:59:02.201771 4202 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: W0223 12:59:02.201780 4202 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: W0223 12:59:02.201789 4202 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: W0223 12:59:02.201799 4202 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: I0223 12:59:02.201816 4202 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: I0223 12:59:02.202244 4202 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 23 12:59:02.208791 master-0 kubenswrapper[4202]: I0223 12:59:02.206828 4202 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 23 12:59:02.209543 master-0 kubenswrapper[4202]: I0223 12:59:02.209014 4202 server.go:997] "Starting client certificate rotation"
Feb 23 12:59:02.209543 master-0 kubenswrapper[4202]: I0223 12:59:02.209052 4202 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 23 12:59:02.209543 master-0 kubenswrapper[4202]: I0223 12:59:02.209279 4202 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 12:59:02.240575 master-0 kubenswrapper[4202]: I0223 12:59:02.240429 4202 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 12:59:02.246199 master-0 kubenswrapper[4202]: E0223 12:59:02.246106 4202 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:02.246771 master-0 kubenswrapper[4202]: I0223 12:59:02.246685 4202 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 12:59:02.271789 master-0 kubenswrapper[4202]: I0223 12:59:02.271718 4202 log.go:25] "Validated CRI v1 runtime API"
Feb 23 12:59:02.278073 master-0 kubenswrapper[4202]: I0223 12:59:02.278011 4202 log.go:25] "Validated CRI v1 image API"
Feb 23 12:59:02.281377 master-0 kubenswrapper[4202]: I0223 12:59:02.281312 4202 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 23 12:59:02.287475 master-0 kubenswrapper[4202]: I0223 12:59:02.287416 4202 fs.go:135] Filesystem UUIDs: map[2d6160db-474a-49c3-9ea7-0693d391532e:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 23 12:59:02.287475 master-0 kubenswrapper[4202]: I0223 12:59:02.287461 4202 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Feb 23 12:59:02.311056 master-0 kubenswrapper[4202]: I0223 12:59:02.310709 4202 manager.go:217] Machine: {Timestamp:2026-02-23 12:59:02.308478755 +0000 UTC m=+0.597340393 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:813062fc9ff74d30ae5cd2159d83a791 SystemUUID:813062fc-9ff7-4d30-ae5c-d2159d83a791 BootID:4abb3f7a-5d3d-42f2-a9ae-25fe202cc7d3 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:69:55:75 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:3c:f3:9f Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:de:1e:e7:99:8c:28 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 23 12:59:02.311056 master-0 kubenswrapper[4202]: I0223 12:59:02.310972 4202 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 23 12:59:02.311292 master-0 kubenswrapper[4202]: I0223 12:59:02.311182 4202 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 23 12:59:02.312755 master-0 kubenswrapper[4202]: I0223 12:59:02.312715 4202 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 23 12:59:02.313044 master-0 kubenswrapper[4202]: I0223 12:59:02.312991 4202 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 23 12:59:02.313304 master-0 kubenswrapper[4202]: I0223 12:59:02.313033 4202 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 23 12:59:02.313382 master-0 kubenswrapper[4202]: I0223 12:59:02.313325 4202 topology_manager.go:138] "Creating topology manager with none policy"
Feb 23 12:59:02.313382 master-0 kubenswrapper[4202]: I0223 12:59:02.313380 4202 container_manager_linux.go:303] "Creating device plugin manager"
Feb 23 12:59:02.313961 master-0 kubenswrapper[4202]: I0223 12:59:02.313924 4202 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 12:59:02.313961 master-0 kubenswrapper[4202]: I0223 12:59:02.313962 4202 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 12:59:02.314135 master-0 kubenswrapper[4202]: I0223 12:59:02.314105 4202 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 12:59:02.314233 master-0 kubenswrapper[4202]: I0223 12:59:02.314205 4202 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 23 12:59:02.320321 master-0 kubenswrapper[4202]: I0223 12:59:02.320266 4202 kubelet.go:418] "Attempting to sync node with API server"
Feb 23 12:59:02.320321 master-0 kubenswrapper[4202]: I0223 12:59:02.320296 4202 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 23 12:59:02.320321 master-0 kubenswrapper[4202]: I0223 12:59:02.320327 4202 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 23 12:59:02.320321 master-0 kubenswrapper[4202]: I0223 12:59:02.320357 4202 kubelet.go:324] "Adding apiserver pod source"
Feb 23 12:59:02.320660 master-0 kubenswrapper[4202]: I0223 12:59:02.320373 4202 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 23 12:59:02.325614 master-0 kubenswrapper[4202]: I0223 12:59:02.325564 4202 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1"
Feb 23 12:59:02.327195 master-0 kubenswrapper[4202]: W0223 12:59:02.327060 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:02.327293 master-0 kubenswrapper[4202]: E0223 12:59:02.327220 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:02.327535 master-0 kubenswrapper[4202]: W0223 12:59:02.327330 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:02.327621 master-0 kubenswrapper[4202]: E0223 12:59:02.327576 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:02.328764 master-0 kubenswrapper[4202]: I0223 12:59:02.328710 4202 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 23 12:59:02.328988 master-0 kubenswrapper[4202]: I0223 12:59:02.328943 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 23 12:59:02.328988 master-0 kubenswrapper[4202]: I0223 12:59:02.328975 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 23 12:59:02.328988 master-0 kubenswrapper[4202]: I0223 12:59:02.328984 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 23 12:59:02.328988 master-0 kubenswrapper[4202]: I0223 12:59:02.328993 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 23 12:59:02.329222 master-0 kubenswrapper[4202]: I0223 12:59:02.329004 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 23 12:59:02.329222 master-0 kubenswrapper[4202]: I0223 12:59:02.329014 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 23 12:59:02.329222 master-0 kubenswrapper[4202]: I0223 12:59:02.329024 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 23 12:59:02.329222 master-0 kubenswrapper[4202]: I0223 12:59:02.329033 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 23 12:59:02.329222 master-0 kubenswrapper[4202]: I0223 12:59:02.329042 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 23 12:59:02.329222 master-0 kubenswrapper[4202]: I0223 12:59:02.329050 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 23 12:59:02.329222 master-0 kubenswrapper[4202]: I0223 12:59:02.329082 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 23 12:59:02.329679 master-0 kubenswrapper[4202]: I0223 12:59:02.329631 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 23 12:59:02.332567 master-0 kubenswrapper[4202]: I0223 12:59:02.332524 4202 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 23 12:59:02.333377 master-0 kubenswrapper[4202]: I0223 12:59:02.333314 4202 server.go:1280] "Started kubelet"
Feb 23 12:59:02.334845 master-0 kubenswrapper[4202]: I0223 12:59:02.334647 4202 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 23 12:59:02.335117 master-0 kubenswrapper[4202]: I0223 12:59:02.334609 4202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 23 12:59:02.335117 master-0 kubenswrapper[4202]: I0223 12:59:02.334985 4202 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 23 12:59:02.335773 master-0 kubenswrapper[4202]: I0223 12:59:02.335741 4202 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 23 12:59:02.335851 master-0 kubenswrapper[4202]: I0223 12:59:02.335735 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:02.335797 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 23 12:59:02.345252 master-0 kubenswrapper[4202]: I0223 12:59:02.345174 4202 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 23 12:59:02.345252 master-0 kubenswrapper[4202]: I0223 12:59:02.345249 4202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 23 12:59:02.346025 master-0 kubenswrapper[4202]: I0223 12:59:02.345947 4202 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 23 12:59:02.346025 master-0 kubenswrapper[4202]: I0223 12:59:02.346003 4202 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 23 12:59:02.346404 master-0 kubenswrapper[4202]: I0223 12:59:02.346363 4202 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 23 12:59:02.346562 master-0 kubenswrapper[4202]: I0223 12:59:02.346513 4202 reconstruct.go:97] "Volume reconstruction finished"
Feb 23 12:59:02.346562 master-0 kubenswrapper[4202]: I0223 12:59:02.346554 4202 reconciler.go:26] "Reconciler: start to sync state"
Feb 23 12:59:02.347627 master-0 kubenswrapper[4202]: E0223 12:59:02.346383 4202 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:02.347627 master-0 kubenswrapper[4202]: E0223 12:59:02.347401 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 23 12:59:02.349443 master-0 kubenswrapper[4202]: W0223 12:59:02.349212 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:02.349518 master-0 kubenswrapper[4202]: E0223 12:59:02.349471 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:02.350190 master-0 kubenswrapper[4202]: E0223 12:59:02.347417 4202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1896e1970f80a868 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.33327012 +0000 UTC m=+0.622131758,LastTimestamp:2026-02-23 12:59:02.33327012 +0000 UTC m=+0.622131758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:02.352134 master-0 kubenswrapper[4202]: I0223 12:59:02.352093 4202 factory.go:55] Registering systemd factory
Feb 23 12:59:02.352205 master-0 kubenswrapper[4202]: I0223 12:59:02.352138 4202 factory.go:221] Registration of the systemd container factory successfully
Feb 23 12:59:02.352916 master-0 kubenswrapper[4202]: I0223 12:59:02.352826 4202 factory.go:153] Registering CRI-O factory
Feb 23 12:59:02.352916 master-0 kubenswrapper[4202]: I0223 12:59:02.352911 4202 factory.go:221] Registration of the crio container factory successfully
Feb 23 12:59:02.353087 master-0 kubenswrapper[4202]: I0223 12:59:02.353048 4202 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 23 12:59:02.353169 master-0 kubenswrapper[4202]: I0223 12:59:02.353099 4202 factory.go:103] Registering Raw factory
Feb 23 12:59:02.353169 master-0 kubenswrapper[4202]: I0223 12:59:02.353130 4202 manager.go:1196] Started watching for new ooms in manager
Feb 23 12:59:02.353467 master-0 kubenswrapper[4202]: I0223 12:59:02.353399 4202 server.go:449] "Adding debug handlers to kubelet server"
Feb 23 12:59:02.354253 master-0 kubenswrapper[4202]: I0223 12:59:02.354211 4202 manager.go:319] Starting recovery of all containers
Feb 23 12:59:02.354317 master-0 kubenswrapper[4202]: E0223 12:59:02.354250 4202 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 23 12:59:02.383923 master-0 kubenswrapper[4202]: I0223 12:59:02.383583 4202 manager.go:324] Recovery completed
Feb 23 12:59:02.399478 master-0 kubenswrapper[4202]: I0223 12:59:02.399442 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:02.401452 master-0 kubenswrapper[4202]: I0223 12:59:02.401404 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:02.401525 master-0 kubenswrapper[4202]: I0223 12:59:02.401479 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:02.401525 master-0 kubenswrapper[4202]: I0223 12:59:02.401499 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:02.403012 master-0 kubenswrapper[4202]: I0223 12:59:02.402981 4202 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 23 12:59:02.403070 master-0 kubenswrapper[4202]: I0223 12:59:02.403016 4202 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 23 12:59:02.403070 master-0 kubenswrapper[4202]: I0223 12:59:02.403054 4202 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 12:59:02.408533 master-0 kubenswrapper[4202]: I0223 12:59:02.408504 4202 policy_none.go:49] "None policy: Start"
Feb 23 12:59:02.409638 master-0 kubenswrapper[4202]: I0223 12:59:02.409612 4202 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 23 12:59:02.409765 master-0 kubenswrapper[4202]: I0223 12:59:02.409650 4202 state_mem.go:35] "Initializing new in-memory state store"
Feb 23 12:59:02.448749 master-0 kubenswrapper[4202]: E0223 12:59:02.448683 4202 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.503814 4202 manager.go:334] "Starting Device Plugin manager"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.504036 4202 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.504051 4202 server.go:79] "Starting device plugin registration server"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.504558 4202 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.504572 4202 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.508531 4202 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.508768 4202 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.508790 4202 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: E0223 12:59:02.511008 4202 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.527709 4202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.531565 4202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.531631 4202 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: I0223 12:59:02.531668 4202 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: E0223 12:59:02.531754 4202 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: W0223 12:59:02.533200 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:02.539605 master-0 kubenswrapper[4202]: E0223 12:59:02.533325 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:02.550024 master-0 kubenswrapper[4202]: E0223 12:59:02.549904 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 23 12:59:02.607113 master-0 kubenswrapper[4202]: I0223 12:59:02.606961 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:02.609206 master-0 kubenswrapper[4202]: I0223 12:59:02.609129 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:02.609289 master-0 kubenswrapper[4202]: I0223 12:59:02.609220 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:02.609289 master-0 kubenswrapper[4202]: I0223 12:59:02.609240 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:02.609289 master-0 kubenswrapper[4202]: I0223 12:59:02.609290 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:02.610642 master-0 kubenswrapper[4202]: E0223 12:59:02.610574 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:59:02.632770 master-0 kubenswrapper[4202]: I0223 12:59:02.632633 4202 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Feb 23 12:59:02.632770 master-0 kubenswrapper[4202]: I0223 12:59:02.632774 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:02.633823 master-0 kubenswrapper[4202]: I0223 12:59:02.633773 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:02.633823 master-0 kubenswrapper[4202]: I0223 12:59:02.633822 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:02.633958 master-0 kubenswrapper[4202]: I0223 12:59:02.633835 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:02.633958 master-0 kubenswrapper[4202]: I0223 12:59:02.633954 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:02.634328 master-0 kubenswrapper[4202]: I0223 12:59:02.634272 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 12:59:02.634445 master-0 kubenswrapper[4202]: I0223 12:59:02.634365 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:02.635018 master-0 kubenswrapper[4202]: I0223 12:59:02.634949 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:02.635093 master-0 kubenswrapper[4202]: I0223 12:59:02.635026 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:02.635093 master-0 kubenswrapper[4202]: I0223 12:59:02.635042 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:02.635280 master-0 kubenswrapper[4202]: I0223 12:59:02.635248 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:02.635330 master-0 kubenswrapper[4202]: I0223 12:59:02.635299 4202 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.635330 master-0 kubenswrapper[4202]: I0223 12:59:02.635319 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.635330 master-0 kubenswrapper[4202]: I0223 12:59:02.635331 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.635618 master-0 kubenswrapper[4202]: I0223 12:59:02.635542 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:59:02.635683 master-0 kubenswrapper[4202]: I0223 12:59:02.635643 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:02.636415 master-0 kubenswrapper[4202]: I0223 12:59:02.636375 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.636415 master-0 kubenswrapper[4202]: I0223 12:59:02.636403 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.636415 master-0 kubenswrapper[4202]: I0223 12:59:02.636413 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.636565 master-0 kubenswrapper[4202]: I0223 12:59:02.636517 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:02.637212 master-0 kubenswrapper[4202]: I0223 12:59:02.636612 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.637212 master-0 kubenswrapper[4202]: I0223 12:59:02.636662 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:02.637212 master-0 kubenswrapper[4202]: I0223 12:59:02.636968 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.637212 master-0 kubenswrapper[4202]: I0223 12:59:02.637045 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.637212 master-0 kubenswrapper[4202]: I0223 12:59:02.637058 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.637900 master-0 kubenswrapper[4202]: I0223 12:59:02.637869 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.637975 master-0 kubenswrapper[4202]: I0223 12:59:02.637904 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.637975 master-0 kubenswrapper[4202]: I0223 12:59:02.637919 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.637975 master-0 kubenswrapper[4202]: I0223 12:59:02.637930 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.637975 master-0 kubenswrapper[4202]: I0223 12:59:02.637970 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.638200 master-0 kubenswrapper[4202]: I0223 12:59:02.637988 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.638200 
master-0 kubenswrapper[4202]: I0223 12:59:02.638128 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:02.638200 master-0 kubenswrapper[4202]: I0223 12:59:02.638171 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.638404 master-0 kubenswrapper[4202]: I0223 12:59:02.638210 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:02.639395 master-0 kubenswrapper[4202]: I0223 12:59:02.639314 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.639395 master-0 kubenswrapper[4202]: I0223 12:59:02.639361 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.639395 master-0 kubenswrapper[4202]: I0223 12:59:02.639384 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.639395 master-0 kubenswrapper[4202]: I0223 12:59:02.639384 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.639811 master-0 kubenswrapper[4202]: I0223 12:59:02.639430 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.639811 master-0 kubenswrapper[4202]: I0223 12:59:02.639400 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.639811 master-0 kubenswrapper[4202]: I0223 12:59:02.639617 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:59:02.639811 master-0 kubenswrapper[4202]: I0223 12:59:02.639650 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:02.640595 master-0 kubenswrapper[4202]: I0223 12:59:02.640548 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.640595 master-0 kubenswrapper[4202]: I0223 12:59:02.640586 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.640595 master-0 kubenswrapper[4202]: I0223 12:59:02.640602 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.647833 master-0 kubenswrapper[4202]: I0223 12:59:02.647773 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.647833 master-0 kubenswrapper[4202]: I0223 12:59:02.647823 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.648035 master-0 kubenswrapper[4202]: I0223 12:59:02.647866 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.648035 master-0 kubenswrapper[4202]: I0223 12:59:02.647911 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.648035 master-0 kubenswrapper[4202]: I0223 12:59:02.647950 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.648035 master-0 kubenswrapper[4202]: I0223 12:59:02.647984 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:59:02.648266 master-0 kubenswrapper[4202]: I0223 12:59:02.648043 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.648266 master-0 kubenswrapper[4202]: I0223 12:59:02.648089 4202 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.648266 master-0 kubenswrapper[4202]: I0223 12:59:02.648115 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.648266 master-0 kubenswrapper[4202]: I0223 12:59:02.648141 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.648266 master-0 kubenswrapper[4202]: I0223 12:59:02.648168 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.648266 master-0 kubenswrapper[4202]: I0223 12:59:02.648235 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " 
pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:59:02.648266 master-0 kubenswrapper[4202]: I0223 12:59:02.648263 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:59:02.648708 master-0 kubenswrapper[4202]: I0223 12:59:02.648289 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:59:02.648708 master-0 kubenswrapper[4202]: I0223 12:59:02.648314 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:59:02.648708 master-0 kubenswrapper[4202]: I0223 12:59:02.648355 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:59:02.648708 master-0 kubenswrapper[4202]: I0223 12:59:02.648383 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.748755 master-0 kubenswrapper[4202]: I0223 12:59:02.748638 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.748755 master-0 kubenswrapper[4202]: I0223 12:59:02.748725 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:59:02.749106 master-0 kubenswrapper[4202]: I0223 12:59:02.749032 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.749239 master-0 kubenswrapper[4202]: I0223 12:59:02.749156 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.749311 master-0 kubenswrapper[4202]: I0223 12:59:02.749242 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:59:02.749413 master-0 kubenswrapper[4202]: I0223 12:59:02.749263 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.749413 master-0 kubenswrapper[4202]: I0223 12:59:02.749280 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.749543 master-0 kubenswrapper[4202]: I0223 12:59:02.749413 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.749543 master-0 kubenswrapper[4202]: I0223 12:59:02.749443 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.749658 master-0 kubenswrapper[4202]: I0223 12:59:02.749536 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.749658 master-0 kubenswrapper[4202]: I0223 12:59:02.749603 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.749658 master-0 kubenswrapper[4202]: I0223 12:59:02.749648 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:59:02.749831 master-0 kubenswrapper[4202]: I0223 12:59:02.749688 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:59:02.749831 master-0 kubenswrapper[4202]: I0223 12:59:02.749723 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:59:02.749831 master-0 kubenswrapper[4202]: I0223 12:59:02.749750 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:59:02.749831 master-0 kubenswrapper[4202]: I0223 12:59:02.749759 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.749831 master-0 kubenswrapper[4202]: I0223 12:59:02.749763 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:59:02.749831 master-0 kubenswrapper[4202]: I0223 12:59:02.749826 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.749853 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.749855 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.749933 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.749986 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750002 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750036 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750075 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750080 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750092 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750109 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750129 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750149 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:02.750173 master-0 kubenswrapper[4202]: I0223 12:59:02.750173 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.751547 master-0 kubenswrapper[4202]: I0223 12:59:02.750228 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.751547 master-0 kubenswrapper[4202]: I0223 12:59:02.750271 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.751547 master-0 kubenswrapper[4202]: I0223 12:59:02.750299 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:02.759490 master-0 kubenswrapper[4202]: I0223 12:59:02.759392 4202 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:59:02.811160 master-0 kubenswrapper[4202]: I0223 12:59:02.811072 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:02.812981 master-0 kubenswrapper[4202]: I0223 12:59:02.812871 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:02.812981 master-0 kubenswrapper[4202]: I0223 12:59:02.812992 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:02.812981 master-0 kubenswrapper[4202]: I0223 12:59:02.813012 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:02.812981 master-0 kubenswrapper[4202]: I0223 12:59:02.813130 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 12:59:02.814617 master-0 kubenswrapper[4202]: E0223 12:59:02.814519 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 23 12:59:02.952840 master-0 kubenswrapper[4202]: E0223 12:59:02.952557 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 23 12:59:02.978922 master-0 kubenswrapper[4202]: I0223 12:59:02.978698 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:59:02.993588 master-0 kubenswrapper[4202]: I0223 12:59:02.993475 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 12:59:03.004212 master-0 kubenswrapper[4202]: I0223 12:59:03.004122 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 12:59:03.044768 master-0 kubenswrapper[4202]: I0223 12:59:03.044638 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 12:59:03.192262 master-0 kubenswrapper[4202]: W0223 12:59:03.192124 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:03.193786 master-0 kubenswrapper[4202]: E0223 12:59:03.192275 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:03.213490 master-0 kubenswrapper[4202]: W0223 12:59:03.213140 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:03.213490 master-0 kubenswrapper[4202]: E0223 12:59:03.213313 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:03.214705 master-0 kubenswrapper[4202]: I0223 12:59:03.214659 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:03.216561 master-0 kubenswrapper[4202]: I0223 12:59:03.216488 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:03.216561 master-0 kubenswrapper[4202]: I0223 12:59:03.216549 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:03.216561 master-0 kubenswrapper[4202]: I0223 12:59:03.216572 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:03.217062 master-0 kubenswrapper[4202]: I0223 12:59:03.216728 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:03.217752 master-0 kubenswrapper[4202]: E0223 12:59:03.217700 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:59:03.337707 master-0 kubenswrapper[4202]: I0223 12:59:03.337611 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:03.602045 master-0 kubenswrapper[4202]: W0223 12:59:03.601934 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c3cb71c9851003c8de7e7c5db4b87e.slice/crio-0d03126ca1d84d609c963c370cc0003bfcc9d01813c6cef66106855300c98278 WatchSource:0}: Error finding container 0d03126ca1d84d609c963c370cc0003bfcc9d01813c6cef66106855300c98278: Status 404 returned error can't find the container with id 0d03126ca1d84d609c963c370cc0003bfcc9d01813c6cef66106855300c98278
Feb 23 12:59:03.612221 master-0 kubenswrapper[4202]: I0223 12:59:03.612171 4202 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 12:59:03.657924 master-0 kubenswrapper[4202]: W0223 12:59:03.657809 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687e92a6cecf1e2beeef16a0b322ad08.slice/crio-67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72 WatchSource:0}: Error finding container 67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72: Status 404 returned error can't find the container with id 67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72
Feb 23 12:59:03.667930 master-0 kubenswrapper[4202]: W0223 12:59:03.667475 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc997c8e9d3be51d454d8e61e376bef08.slice/crio-dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271 WatchSource:0}: Error finding container dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271: Status 404 returned error can't find the container with id dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271
Feb 23 12:59:03.697245 master-0 kubenswrapper[4202]: W0223 12:59:03.697150 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dab5d350ebc129b0bfa4714d330b15.slice/crio-1baf8403b957130fe4a9ee4ed69aaae906a37f6c365a5fe5ce5b8bafc29d4a14 WatchSource:0}: Error finding container 1baf8403b957130fe4a9ee4ed69aaae906a37f6c365a5fe5ce5b8bafc29d4a14: Status 404 returned error can't find the container with id 1baf8403b957130fe4a9ee4ed69aaae906a37f6c365a5fe5ce5b8bafc29d4a14
Feb 23 12:59:03.754523 master-0 kubenswrapper[4202]: E0223 12:59:03.754399 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 23 12:59:03.861680 master-0 kubenswrapper[4202]: W0223 12:59:03.861410 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:03.861680 master-0 kubenswrapper[4202]: E0223 12:59:03.861544 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:04.018834 master-0 kubenswrapper[4202]: I0223 12:59:04.018703 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:04.020088 master-0 kubenswrapper[4202]: I0223 12:59:04.020033 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:04.020210 master-0 kubenswrapper[4202]: I0223 12:59:04.020103 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:04.020210 master-0 kubenswrapper[4202]: I0223 12:59:04.020129 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:04.020368 master-0 kubenswrapper[4202]: I0223 12:59:04.020221 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:04.021506 master-0 kubenswrapper[4202]: E0223 12:59:04.021429 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:59:04.120651 master-0 kubenswrapper[4202]: W0223 12:59:04.120464 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:04.120651 master-0 kubenswrapper[4202]: E0223 12:59:04.120607 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:04.288667 master-0 kubenswrapper[4202]: I0223 12:59:04.288585 4202 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 12:59:04.290531 master-0 kubenswrapper[4202]: E0223 12:59:04.290430 4202 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:04.336986 master-0 kubenswrapper[4202]: I0223 12:59:04.336940 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:04.541101 master-0 kubenswrapper[4202]: I0223 12:59:04.540957 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"3b03398c7c5342531ea65126f53e9604327adfe194442ab3309f39be1e15bbf7"}
Feb 23 12:59:04.542923 master-0 kubenswrapper[4202]: I0223 12:59:04.542862 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"0d03126ca1d84d609c963c370cc0003bfcc9d01813c6cef66106855300c98278"}
Feb 23 12:59:04.544045 master-0 kubenswrapper[4202]: I0223 12:59:04.543963 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"1baf8403b957130fe4a9ee4ed69aaae906a37f6c365a5fe5ce5b8bafc29d4a14"}
Feb 23 12:59:04.545230 master-0 kubenswrapper[4202]: I0223 12:59:04.545123 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271"}
Feb 23 12:59:04.546251 master-0 kubenswrapper[4202]: I0223 12:59:04.546204 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72"}
Feb 23 12:59:05.335011 master-0 kubenswrapper[4202]: E0223 12:59:05.334818 4202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1896e1970f80a868 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.33327012 +0000 UTC m=+0.622131758,LastTimestamp:2026-02-23 12:59:02.33327012 +0000 UTC m=+0.622131758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:05.337709 master-0 kubenswrapper[4202]: I0223 12:59:05.337651 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:05.355815 master-0 kubenswrapper[4202]: E0223 12:59:05.355737 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Feb 23 12:59:05.391153 master-0 kubenswrapper[4202]: W0223 12:59:05.391022 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:05.391153 master-0 kubenswrapper[4202]: E0223 12:59:05.391103 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:05.621937 master-0 kubenswrapper[4202]: I0223 12:59:05.621811 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:05.623092 master-0 kubenswrapper[4202]: I0223 12:59:05.623055 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:05.623166 master-0 kubenswrapper[4202]: I0223 12:59:05.623099 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:05.623166 master-0 kubenswrapper[4202]: I0223 12:59:05.623112 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:05.623166 master-0 kubenswrapper[4202]: I0223 12:59:05.623163 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:05.624063 master-0 kubenswrapper[4202]: E0223 12:59:05.624018 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:59:06.007440 master-0 kubenswrapper[4202]: W0223 12:59:06.007342 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:06.007641 master-0 kubenswrapper[4202]: E0223 12:59:06.007470 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:06.197481 master-0 kubenswrapper[4202]: W0223 12:59:06.197366 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:06.197481 master-0 kubenswrapper[4202]: E0223 12:59:06.197447 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:06.206497 master-0 kubenswrapper[4202]: W0223 12:59:06.206426 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:06.206497 master-0 kubenswrapper[4202]: E0223 12:59:06.206470 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:06.339090 master-0 kubenswrapper[4202]: I0223 12:59:06.337056 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:06.553152 master-0 kubenswrapper[4202]: I0223 12:59:06.553090 4202 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="d48411ed762843923134a92bcee0b4ce878e0a6398d43a3652f882b30f64b563" exitCode=0
Feb 23 12:59:06.553359 master-0 kubenswrapper[4202]: I0223 12:59:06.553192 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"d48411ed762843923134a92bcee0b4ce878e0a6398d43a3652f882b30f64b563"}
Feb 23 12:59:06.553436 master-0 kubenswrapper[4202]: I0223 12:59:06.553391 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:06.554558 master-0 kubenswrapper[4202]: I0223 12:59:06.554519 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:06.554558 master-0 kubenswrapper[4202]: I0223 12:59:06.554555 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:06.554652 master-0 kubenswrapper[4202]: I0223 12:59:06.554570 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:06.557309 master-0 kubenswrapper[4202]: I0223 12:59:06.557283 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7"}
Feb 23 12:59:07.338029 master-0 kubenswrapper[4202]: I0223 12:59:07.337975 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:07.561500 master-0 kubenswrapper[4202]: I0223 12:59:07.561431 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4"}
Feb 23 12:59:07.561500 master-0 kubenswrapper[4202]: I0223 12:59:07.561492 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:07.562509 master-0 kubenswrapper[4202]: I0223 12:59:07.562486 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:07.562581 master-0 kubenswrapper[4202]: I0223 12:59:07.562525 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:07.562581 master-0 kubenswrapper[4202]: I0223 12:59:07.562537 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:07.563563 master-0 kubenswrapper[4202]: I0223 12:59:07.563539 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log"
Feb 23 12:59:07.564117 master-0 kubenswrapper[4202]: I0223 12:59:07.564083 4202 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="03cf9834655e910bd5662d1c53133f59983430a4f8915b26c194b7096ab4cbb3" exitCode=1
Feb 23 12:59:07.564157 master-0 kubenswrapper[4202]: I0223 12:59:07.564122 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"03cf9834655e910bd5662d1c53133f59983430a4f8915b26c194b7096ab4cbb3"}
Feb 23 12:59:07.564199 master-0 kubenswrapper[4202]: I0223 12:59:07.564184 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:07.565493 master-0 kubenswrapper[4202]: I0223 12:59:07.565468 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:07.565538 master-0 kubenswrapper[4202]: I0223 12:59:07.565504 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:07.565538 master-0 kubenswrapper[4202]: I0223 12:59:07.565515 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:07.565850 master-0 kubenswrapper[4202]: I0223 12:59:07.565821 4202 scope.go:117] "RemoveContainer" containerID="03cf9834655e910bd5662d1c53133f59983430a4f8915b26c194b7096ab4cbb3"
Feb 23 12:59:08.337918 master-0 kubenswrapper[4202]: I0223 12:59:08.337834 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:08.516305 master-0 kubenswrapper[4202]: I0223 12:59:08.515406 4202 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 12:59:08.516908 master-0 kubenswrapper[4202]: E0223 12:59:08.516863 4202 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:08.556860 master-0 kubenswrapper[4202]: E0223 12:59:08.556801 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Feb 23 12:59:08.567812 master-0 kubenswrapper[4202]: I0223 12:59:08.567771 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log"
Feb 23 12:59:08.568227 master-0 kubenswrapper[4202]: I0223 12:59:08.568195 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log"
Feb 23 12:59:08.568611 master-0 kubenswrapper[4202]: I0223 12:59:08.568566 4202 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="ee9fab317483dbba39884f1ba91a84af81e0a3f2cc77667de7087f85d3c7b33a" exitCode=1
Feb 23 12:59:08.568712 master-0 kubenswrapper[4202]: I0223 12:59:08.568686 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:08.569168 master-0 kubenswrapper[4202]: I0223 12:59:08.569138 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:08.569439 master-0 kubenswrapper[4202]: I0223 12:59:08.569403 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"ee9fab317483dbba39884f1ba91a84af81e0a3f2cc77667de7087f85d3c7b33a"}
Feb 23 12:59:08.569483 master-0 kubenswrapper[4202]: I0223 12:59:08.569470 4202 scope.go:117] "RemoveContainer" containerID="03cf9834655e910bd5662d1c53133f59983430a4f8915b26c194b7096ab4cbb3"
Feb 23 12:59:08.570103 master-0 kubenswrapper[4202]: I0223 12:59:08.570075 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:08.570103 master-0 kubenswrapper[4202]: I0223 12:59:08.570100 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:08.570165 master-0 kubenswrapper[4202]: I0223 12:59:08.570109 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:08.570429 master-0 kubenswrapper[4202]: I0223 12:59:08.570405 4202 scope.go:117] "RemoveContainer" containerID="ee9fab317483dbba39884f1ba91a84af81e0a3f2cc77667de7087f85d3c7b33a"
Feb 23 12:59:08.570569 master-0 kubenswrapper[4202]: E0223 12:59:08.570540 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 23 12:59:08.570611 master-0 kubenswrapper[4202]: I0223 12:59:08.570600 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:08.570642 master-0 kubenswrapper[4202]: I0223 12:59:08.570614 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:08.570642 master-0 kubenswrapper[4202]: I0223 12:59:08.570623 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:08.825192 master-0 kubenswrapper[4202]: I0223 12:59:08.825125 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:08.826496 master-0 kubenswrapper[4202]: I0223 12:59:08.826468 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:08.826562 master-0 kubenswrapper[4202]: I0223 12:59:08.826507 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:08.826562 master-0 kubenswrapper[4202]: I0223 12:59:08.826518 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:08.826621 master-0 kubenswrapper[4202]: I0223 12:59:08.826581 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:08.827929 master-0 kubenswrapper[4202]: E0223 12:59:08.827862 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:59:09.217214 master-0 kubenswrapper[4202]: W0223 12:59:09.217006 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:09.217214 master-0 kubenswrapper[4202]: E0223 12:59:09.217115 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:09.338175 master-0 kubenswrapper[4202]: I0223 12:59:09.338083 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:09.570367 master-0 kubenswrapper[4202]: I0223 12:59:09.570315 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:09.571653 master-0 kubenswrapper[4202]: I0223 12:59:09.571617 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:09.571710 master-0 kubenswrapper[4202]: I0223 12:59:09.571676 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:09.571710 master-0 kubenswrapper[4202]: I0223 12:59:09.571701 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:09.572288 master-0 kubenswrapper[4202]: I0223 12:59:09.572261 4202 scope.go:117] "RemoveContainer" containerID="ee9fab317483dbba39884f1ba91a84af81e0a3f2cc77667de7087f85d3c7b33a"
Feb 23 12:59:09.572592 master-0 kubenswrapper[4202]: E0223 12:59:09.572549 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 23 12:59:10.338250 master-0 kubenswrapper[4202]: I0223 12:59:10.338165 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:10.483788 master-0 kubenswrapper[4202]: W0223 12:59:10.483691 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:10.483788 master-0 kubenswrapper[4202]: E0223 12:59:10.483787 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:11.338713 master-0 kubenswrapper[4202]: I0223 12:59:11.338106 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:11.576674 master-0 kubenswrapper[4202]: I0223 12:59:11.576576 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log"
Feb 23 12:59:11.578978 master-0 kubenswrapper[4202]: I0223 12:59:11.578609 4202 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="6266f5fd682a0e1614165c124ec4bfc2e4e9278c8768f489236b9ce20082b0a0" exitCode=0
Feb 23 12:59:11.578978 master-0 kubenswrapper[4202]: I0223 12:59:11.578694 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"6266f5fd682a0e1614165c124ec4bfc2e4e9278c8768f489236b9ce20082b0a0"}
Feb 23 12:59:11.578978 master-0 kubenswrapper[4202]: I0223 12:59:11.578749 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:11.579716 master-0 kubenswrapper[4202]: I0223 12:59:11.579678 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:11.579831 master-0 kubenswrapper[4202]: I0223 12:59:11.579724 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:11.579831 master-0 kubenswrapper[4202]: I0223 12:59:11.579737 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:11.581417 master-0 kubenswrapper[4202]: I0223 12:59:11.581371 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"18dceb7e5c040918c12a2232d059dfb40d6eebb6d7f4618c2280a12d936f7b09"}
Feb 23 12:59:11.583398 master-0 kubenswrapper[4202]: I0223 12:59:11.583368 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:11.583515 master-0 kubenswrapper[4202]: I0223 12:59:11.583401 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"f49a7c31e3a171926240734ad805919af2d46930792b7ef061d645ad8ae0dac5"}
Feb 23 12:59:11.583515 master-0 kubenswrapper[4202]: I0223 12:59:11.583499 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:11.583940 master-0 kubenswrapper[4202]: I0223 12:59:11.583902 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:11.583940 master-0 kubenswrapper[4202]: I0223 12:59:11.583924 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:11.583940 master-0 kubenswrapper[4202]: I0223 12:59:11.583938 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:11.584311 master-0 kubenswrapper[4202]: I0223 12:59:11.584244 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:11.584311 master-0 kubenswrapper[4202]: I0223 12:59:11.584270 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:11.584311 master-0 kubenswrapper[4202]: I0223 12:59:11.584282 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:11.646301 master-0 kubenswrapper[4202]: W0223 12:59:11.646129 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:59:11.646469 master-0 kubenswrapper[4202]: E0223 12:59:11.646350 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:59:12.511646 master-0 kubenswrapper[4202]: E0223 12:59:12.511563 4202 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:59:12.589721 master-0 kubenswrapper[4202]: I0223 12:59:12.589643 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"a6c6c79f23b0abea958a23a6a452ad603f2442cfcf12d274565330ccbe7468f8"}
Feb 23 12:59:12.592772 master-0 kubenswrapper[4202]: I0223 12:59:12.592245 4202 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="18dceb7e5c040918c12a2232d059dfb40d6eebb6d7f4618c2280a12d936f7b09" exitCode=1
Feb 23 12:59:12.592772 master-0 kubenswrapper[4202]: I0223 12:59:12.592415 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:12.592854 master-0 kubenswrapper[4202]: I0223 12:59:12.592785 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"18dceb7e5c040918c12a2232d059dfb40d6eebb6d7f4618c2280a12d936f7b09"}
Feb 23 12:59:12.594784 master-0 kubenswrapper[4202]: I0223 12:59:12.594209 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:12.594784 master-0 kubenswrapper[4202]: I0223 12:59:12.594260 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:12.594784 master-0 kubenswrapper[4202]: I0223 12:59:12.594270 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:13.382890 master-0 kubenswrapper[4202]: W0223 12:59:13.382720 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 23 12:59:13.382890 master-0 kubenswrapper[4202]: E0223 12:59:13.382781 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch
*v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 23 12:59:13.382890 master-0 kubenswrapper[4202]: I0223 12:59:13.382853 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:14.341367 master-0 kubenswrapper[4202]: I0223 12:59:14.341302 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:14.604171 master-0 kubenswrapper[4202]: I0223 12:59:14.604028 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"b4fac1a45391e1b8c8d33575e403cce50d3b72e24f353f507b5f94bf171c63ab"} Feb 23 12:59:14.604171 master-0 kubenswrapper[4202]: I0223 12:59:14.604144 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:14.605168 master-0 kubenswrapper[4202]: I0223 12:59:14.605130 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:14.605214 master-0 kubenswrapper[4202]: I0223 12:59:14.605171 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:14.605214 master-0 kubenswrapper[4202]: I0223 12:59:14.605185 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:14.606054 master-0 
kubenswrapper[4202]: I0223 12:59:14.606025 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"803106da6099883ee98c3575d18f2f07b351da86541aaf47ff092d2a33469b54"} Feb 23 12:59:14.606118 master-0 kubenswrapper[4202]: I0223 12:59:14.606088 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:14.606809 master-0 kubenswrapper[4202]: I0223 12:59:14.606784 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:14.606852 master-0 kubenswrapper[4202]: I0223 12:59:14.606812 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:14.606852 master-0 kubenswrapper[4202]: I0223 12:59:14.606821 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:14.607015 master-0 kubenswrapper[4202]: I0223 12:59:14.606992 4202 scope.go:117] "RemoveContainer" containerID="18dceb7e5c040918c12a2232d059dfb40d6eebb6d7f4618c2280a12d936f7b09" Feb 23 12:59:14.965882 master-0 kubenswrapper[4202]: E0223 12:59:14.965671 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 23 12:59:15.088695 master-0 kubenswrapper[4202]: I0223 12:59:15.088591 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:15.229195 master-0 kubenswrapper[4202]: I0223 12:59:15.228980 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:15.230912 
master-0 kubenswrapper[4202]: I0223 12:59:15.230831 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:15.230912 master-0 kubenswrapper[4202]: I0223 12:59:15.230915 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:15.231025 master-0 kubenswrapper[4202]: I0223 12:59:15.230926 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:15.231025 master-0 kubenswrapper[4202]: I0223 12:59:15.230999 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 12:59:15.240193 master-0 kubenswrapper[4202]: E0223 12:59:15.240119 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 23 12:59:15.320830 master-0 kubenswrapper[4202]: I0223 12:59:15.320731 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:15.340733 master-0 kubenswrapper[4202]: E0223 12:59:15.340545 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1970f80a868 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.33327012 +0000 UTC m=+0.622131758,LastTimestamp:2026-02-23 12:59:02.33327012 +0000 UTC m=+0.622131758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.341058 master-0 kubenswrapper[4202]: I0223 12:59:15.340740 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:15.342790 master-0 kubenswrapper[4202]: E0223 12:59:15.342657 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19713911249 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,LastTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.345736 master-0 kubenswrapper[4202]: E0223 12:59:15.345548 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391a66a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,LastTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.350492 master-0 kubenswrapper[4202]: E0223 12:59:15.350009 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391f4a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401512616 +0000 UTC m=+0.690374274,LastTimestamp:2026-02-23 12:59:02.401512616 +0000 UTC m=+0.690374274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.353676 master-0 kubenswrapper[4202]: E0223 12:59:15.353533 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971af89e39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.525681209 +0000 UTC 
m=+0.814542877,LastTimestamp:2026-02-23 12:59:02.525681209 +0000 UTC m=+0.814542877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.419171 master-0 kubenswrapper[4202]: E0223 12:59:15.418997 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19713911249\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19713911249 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,LastTimestamp:2026-02-23 12:59:02.609194249 +0000 UTC m=+0.898055927,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.426782 master-0 kubenswrapper[4202]: E0223 12:59:15.426566 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391a66a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391a66a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,LastTimestamp:2026-02-23 12:59:02.609233241 +0000 UTC 
m=+0.898094909,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.432254 master-0 kubenswrapper[4202]: E0223 12:59:15.432121 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391f4a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391f4a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401512616 +0000 UTC m=+0.690374274,LastTimestamp:2026-02-23 12:59:02.609250492 +0000 UTC m=+0.898112150,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.438838 master-0 kubenswrapper[4202]: E0223 12:59:15.438681 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19713911249\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19713911249 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,LastTimestamp:2026-02-23 12:59:02.633804769 +0000 UTC m=+0.922666417,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.443544 master-0 kubenswrapper[4202]: E0223 12:59:15.443402 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391a66a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391a66a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,LastTimestamp:2026-02-23 12:59:02.63383069 +0000 UTC m=+0.922692328,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.447959 master-0 kubenswrapper[4202]: E0223 12:59:15.447836 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391f4a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391f4a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401512616 +0000 UTC m=+0.690374274,LastTimestamp:2026-02-23 12:59:02.633843721 +0000 UTC m=+0.922705359,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.452577 master-0 kubenswrapper[4202]: E0223 12:59:15.452385 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19713911249\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19713911249 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,LastTimestamp:2026-02-23 12:59:02.635003349 +0000 UTC m=+0.923864987,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.457989 master-0 kubenswrapper[4202]: E0223 12:59:15.457855 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391a66a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391a66a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,LastTimestamp:2026-02-23 12:59:02.63503696 +0000 UTC m=+0.923898598,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.462449 master-0 kubenswrapper[4202]: E0223 12:59:15.462348 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391f4a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391f4a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401512616 +0000 UTC m=+0.690374274,LastTimestamp:2026-02-23 12:59:02.635049291 +0000 UTC m=+0.923910929,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.466202 master-0 kubenswrapper[4202]: E0223 12:59:15.466045 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19713911249\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19713911249 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,LastTimestamp:2026-02-23 12:59:02.635312504 +0000 UTC m=+0.924174152,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.472510 master-0 kubenswrapper[4202]: E0223 12:59:15.472383 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391a66a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391a66a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,LastTimestamp:2026-02-23 12:59:02.635326684 +0000 UTC m=+0.924188322,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.490949 master-0 kubenswrapper[4202]: E0223 12:59:15.490589 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391f4a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391f4a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401512616 +0000 UTC m=+0.690374274,LastTimestamp:2026-02-23 12:59:02.635357586 +0000 UTC m=+0.924219234,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.498548 master-0 kubenswrapper[4202]: E0223 12:59:15.498325 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19713911249\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19713911249 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,LastTimestamp:2026-02-23 12:59:02.636398068 +0000 UTC m=+0.925259716,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.504252 master-0 kubenswrapper[4202]: E0223 12:59:15.504094 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391a66a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391a66a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,LastTimestamp:2026-02-23 12:59:02.636409498 +0000 UTC m=+0.925271136,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.511113 master-0 kubenswrapper[4202]: E0223 12:59:15.510962 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391f4a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391f4a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401512616 +0000 UTC m=+0.690374274,LastTimestamp:2026-02-23 12:59:02.636419819 +0000 UTC m=+0.925281447,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.516822 master-0 kubenswrapper[4202]: E0223 12:59:15.516043 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19713911249\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19713911249 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,LastTimestamp:2026-02-23 12:59:02.637013558 +0000 UTC m=+0.925875196,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.522826 master-0 kubenswrapper[4202]: E0223 12:59:15.522731    4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391a66a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391a66a  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,LastTimestamp:2026-02-23 12:59:02.63705348 +0000 UTC m=+0.925915118,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.530543 master-0 kubenswrapper[4202]: E0223 12:59:15.530254    4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391f4a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391f4a8  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401512616 +0000 UTC m=+0.690374274,LastTimestamp:2026-02-23 12:59:02.63706605 +0000 UTC m=+0.925927688,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.537740 master-0 kubenswrapper[4202]: E0223 12:59:15.537574    4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19713911249\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19713911249  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401454665 +0000 UTC m=+0.690316333,LastTimestamp:2026-02-23 12:59:02.637890102 +0000 UTC m=+0.926751740,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.544826 master-0 kubenswrapper[4202]: E0223 12:59:15.544646    4202 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e1971391a66a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1971391a66a  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:02.401492586 +0000 UTC m=+0.690354254,LastTimestamp:2026-02-23 12:59:02.637914883 +0000 UTC m=+0.926776521,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.555380 master-0 kubenswrapper[4202]: E0223 12:59:15.555149    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1896e1975bba083d  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:03.612098621 +0000 UTC m=+1.900960259,LastTimestamp:2026-02-23 12:59:03.612098621 +0000 UTC m=+1.900960259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.563253 master-0 kubenswrapper[4202]: E0223 12:59:15.563088    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1975d3a98de  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:03.63730147 +0000 UTC m=+1.926163108,LastTimestamp:2026-02-23 12:59:03.63730147 +0000 UTC m=+1.926163108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.570132 master-0 kubenswrapper[4202]: E0223 12:59:15.570005    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e1975eb9bf4a  openshift-kube-apiserver    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:03.662411594 +0000 UTC m=+1.951273262,LastTimestamp:2026-02-23 12:59:03.662411594 +0000 UTC m=+1.951273262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.575358 master-0 kubenswrapper[4202]: E0223 12:59:15.575260    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1975f9359b9  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:03.676672441 +0000 UTC m=+1.965534079,LastTimestamp:2026-02-23 12:59:03.676672441 +0000 UTC m=+1.965534079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.582022 master-0 kubenswrapper[4202]: E0223 12:59:15.581950    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e197610b926c  openshift-etcd    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:03.701328492 +0000 UTC m=+1.990190160,LastTimestamp:2026-02-23 12:59:03.701328492 +0000 UTC m=+1.990190160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.588753 master-0 kubenswrapper[4202]: E0223 12:59:15.588659    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e197cb68d6bc  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" in 1.809s (1.809s including waiting). Image size: 464984427 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:05.485825724 +0000 UTC m=+3.774687352,LastTimestamp:2026-02-23 12:59:05.485825724 +0000 UTC m=+3.774687352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.596887 master-0 kubenswrapper[4202]: E0223 12:59:15.596807    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e197f4d9ba6e  openshift-etcd    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" in 2.479s (2.479s including waiting). Image size: 529218694 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.181089902 +0000 UTC m=+4.469951520,LastTimestamp:2026-02-23 12:59:06.181089902 +0000 UTC m=+4.469951520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.601663 master-0 kubenswrapper[4202]: E0223 12:59:15.601566    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e197ffd3a700  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.365241088 +0000 UTC m=+4.654102716,LastTimestamp:2026-02-23 12:59:06.365241088 +0000 UTC m=+4.654102716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.607739 master-0 kubenswrapper[4202]: E0223 12:59:15.607644    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e197fffb29e9  openshift-etcd    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.367830505 +0000 UTC m=+4.656692133,LastTimestamp:2026-02-23 12:59:06.367830505 +0000 UTC m=+4.656692133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.611084 master-0 kubenswrapper[4202]: I0223 12:59:15.611028    4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d"}
Feb 23 12:59:15.611176 master-0 kubenswrapper[4202]: I0223 12:59:15.611086    4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:15.611226 master-0 kubenswrapper[4202]: I0223 12:59:15.611197    4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:15.611789 master-0 kubenswrapper[4202]: I0223 12:59:15.611746    4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:15.611789 master-0 kubenswrapper[4202]: I0223 12:59:15.611787    4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:15.611914 master-0 kubenswrapper[4202]: I0223 12:59:15.611801    4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:15.612542 master-0 kubenswrapper[4202]: I0223 12:59:15.612494    4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:15.612542 master-0 kubenswrapper[4202]: I0223 12:59:15.612529    4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:15.612542 master-0 kubenswrapper[4202]: I0223 12:59:15.612538    4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:15.614391 master-0 kubenswrapper[4202]: E0223 12:59:15.614277    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e19800e7212e  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.383294766 +0000 UTC m=+4.672156394,LastTimestamp:2026-02-23 12:59:06.383294766 +0000 UTC m=+4.672156394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.621264 master-0 kubenswrapper[4202]: E0223 12:59:15.621039    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e19800f3e581  openshift-etcd    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.384131457 +0000 UTC m=+4.672993085,LastTimestamp:2026-02-23 12:59:06.384131457 +0000 UTC m=+4.672993085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.628159 master-0 kubenswrapper[4202]: E0223 12:59:15.627798    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e198011f0d54  openshift-etcd    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.3869597 +0000 UTC m=+4.675821328,LastTimestamp:2026-02-23 12:59:06.3869597 +0000 UTC m=+4.675821328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.635810 master-0 kubenswrapper[4202]: E0223 12:59:15.635686    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1980b62cb18  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.559171352 +0000 UTC m=+4.848032980,LastTimestamp:2026-02-23 12:59:06.559171352 +0000 UTC m=+4.848032980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.642012 master-0 kubenswrapper[4202]: E0223 12:59:15.641894    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e1980c56771f  openshift-etcd    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.575140639 +0000 UTC m=+4.864002267,LastTimestamp:2026-02-23 12:59:06.575140639 +0000 UTC m=+4.864002267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.647790 master-0 kubenswrapper[4202]: E0223 12:59:15.647691    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e1980dca58c6  openshift-etcd    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.599512262 +0000 UTC m=+4.888373910,LastTimestamp:2026-02-23 12:59:06.599512262 +0000 UTC m=+4.888373910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.653714 master-0 kubenswrapper[4202]: E0223 12:59:15.653600    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e19818929a1b  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.780408347 +0000 UTC m=+5.069269975,LastTimestamp:2026-02-23 12:59:06.780408347 +0000 UTC m=+5.069269975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.660277 master-0 kubenswrapper[4202]: E0223 12:59:15.660191    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e198195c3547  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.793620807 +0000 UTC m=+5.082482435,LastTimestamp:2026-02-23 12:59:06.793620807 +0000 UTC m=+5.082482435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.665277 master-0 kubenswrapper[4202]: E0223 12:59:15.665193    4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e1980b62cb18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1980b62cb18  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.559171352 +0000 UTC m=+4.848032980,LastTimestamp:2026-02-23 12:59:07.56850757 +0000 UTC m=+5.857369198,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.670489 master-0 kubenswrapper[4202]: E0223 12:59:15.670420    4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e19818929a1b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e19818929a1b  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.780408347 +0000 UTC m=+5.069269975,LastTimestamp:2026-02-23 12:59:07.762901242 +0000 UTC m=+6.051762870,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.675032 master-0 kubenswrapper[4202]: E0223 12:59:15.674961    4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e198195c3547\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e198195c3547  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.793620807 +0000 UTC m=+5.082482435,LastTimestamp:2026-02-23 12:59:07.772793643 +0000 UTC m=+6.061655271,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.681305 master-0 kubenswrapper[4202]: E0223 12:59:15.681217    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e198834583f1  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:08.570518513 +0000 UTC m=+6.859380131,LastTimestamp:2026-02-23 12:59:08.570518513 +0000 UTC m=+6.859380131,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.688972 master-0 kubenswrapper[4202]: E0223 12:59:15.688887    4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e198834583f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e198834583f1  openshift-machine-config-operator    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:08.570518513 +0000 UTC m=+6.859380131,LastTimestamp:2026-02-23 12:59:09.572495434 +0000 UTC m=+7.861357092,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.693979 master-0 kubenswrapper[4202]: E0223 12:59:15.693904    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1990d200850  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 7.245s (7.245s including waiting). Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:10.88331784 +0000 UTC m=+9.172179478,LastTimestamp:2026-02-23 12:59:10.88331784 +0000 UTC m=+9.172179478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.700518 master-0 kubenswrapper[4202]: E0223 12:59:15.700313    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1896e1990f7b4a82  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 7.31s (7.31s including waiting). Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:10.922852994 +0000 UTC m=+9.211714622,LastTimestamp:2026-02-23 12:59:10.922852994 +0000 UTC m=+9.211714622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.707325 master-0 kubenswrapper[4202]: E0223 12:59:15.707229    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e1990fc91509  openshift-kube-apiserver    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 7.265s (7.265s including waiting). Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:10.927951113 +0000 UTC m=+9.216812761,LastTimestamp:2026-02-23 12:59:10.927951113 +0000 UTC m=+9.216812761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.712403 master-0 kubenswrapper[4202]: E0223 12:59:15.712307    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1991bb1f299  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.127761561 +0000 UTC m=+9.416623229,LastTimestamp:2026-02-23 12:59:11.127761561 +0000 UTC m=+9.416623229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.718011 master-0 kubenswrapper[4202]: E0223 12:59:15.717918    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1896e1991bb5c2bf  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.128011455 +0000 UTC m=+9.416873113,LastTimestamp:2026-02-23 12:59:11.128011455 +0000 UTC m=+9.416873113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.722623 master-0 kubenswrapper[4202]: E0223 12:59:15.722533    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1896e1991c47f693  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.137592979 +0000 UTC m=+9.426454607,LastTimestamp:2026-02-23 12:59:11.137592979 +0000 UTC m=+9.426454607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.727239 master-0 kubenswrapper[4202]: E0223 12:59:15.727154    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1991c9a0961  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.142971745 +0000 UTC m=+9.431833403,LastTimestamp:2026-02-23 12:59:11.142971745 +0000 UTC m=+9.431833403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.733880 master-0 kubenswrapper[4202]: E0223 12:59:15.733794    4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1991cabf00f  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.144144911 +0000 UTC m=+9.433006539,LastTimestamp:2026-02-23 12:59:11.144144911 +0000 UTC m=+9.433006539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:15.739693 master-0
kubenswrapper[4202]: E0223 12:59:15.739585 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e1991dc1fd8f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.162367375 +0000 UTC m=+9.451229013,LastTimestamp:2026-02-23 12:59:11.162367375 +0000 UTC m=+9.451229013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.745942 master-0 kubenswrapper[4202]: E0223 12:59:15.745763 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e1991eb78064 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.178457188 +0000 UTC m=+9.467318816,LastTimestamp:2026-02-23 12:59:11.178457188 +0000 UTC m=+9.467318816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.750742 master-0 kubenswrapper[4202]: E0223 12:59:15.750642 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e19936d8db43 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.583296323 +0000 UTC m=+9.872157951,LastTimestamp:2026-02-23 12:59:11.583296323 +0000 UTC m=+9.872157951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.757391 master-0 kubenswrapper[4202]: E0223 12:59:15.757285 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e199435ccb6f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.793269615 +0000 UTC m=+10.082131273,LastTimestamp:2026-02-23 12:59:11.793269615 +0000 UTC m=+10.082131273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.763931 master-0 kubenswrapper[4202]: E0223 12:59:15.763856 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e19944174d8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.80549262 +0000 UTC m=+10.094354288,LastTimestamp:2026-02-23 12:59:11.80549262 +0000 UTC m=+10.094354288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.770895 master-0 kubenswrapper[4202]: E0223 12:59:15.770729 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e1994453e058 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.80946236 +0000 UTC m=+10.098324008,LastTimestamp:2026-02-23 12:59:11.80946236 +0000 UTC m=+10.098324008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.777404 master-0 kubenswrapper[4202]: E0223 12:59:15.777226 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e199c7b3bcce kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\" in 2.869s (2.869s including waiting). 
Image size: 505137106 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:14.013560014 +0000 UTC m=+12.302421642,LastTimestamp:2026-02-23 12:59:14.013560014 +0000 UTC m=+12.302421642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.784464 master-0 kubenswrapper[4202]: E0223 12:59:15.784326 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e199c992c220 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" in 2.235s (2.235s including waiting). 
Image size: 514875199 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:14.04495312 +0000 UTC m=+12.333814748,LastTimestamp:2026-02-23 12:59:14.04495312 +0000 UTC m=+12.333814748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.793999 master-0 kubenswrapper[4202]: E0223 12:59:15.793729 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e199d5988d97 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:14.246659479 +0000 UTC m=+12.535521107,LastTimestamp:2026-02-23 12:59:14.246659479 +0000 UTC m=+12.535521107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.800718 master-0 kubenswrapper[4202]: E0223 12:59:15.800514 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e199d62d4909 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:14.256406793 +0000 UTC m=+12.545268421,LastTimestamp:2026-02-23 12:59:14.256406793 +0000 UTC m=+12.545268421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.808623 master-0 kubenswrapper[4202]: E0223 12:59:15.808452 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e199d723c69d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:14.272560797 +0000 UTC m=+12.561422425,LastTimestamp:2026-02-23 12:59:14.272560797 +0000 UTC m=+12.561422425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.816266 master-0 kubenswrapper[4202]: E0223 12:59:15.815901 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e199d7244812 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:14.272593938 +0000 UTC m=+12.561455566,LastTimestamp:2026-02-23 12:59:14.272593938 +0000 UTC m=+12.561455566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.824521 master-0 kubenswrapper[4202]: E0223 12:59:15.824288 4202 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e199eb2f9d29 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:14.608880937 +0000 UTC m=+12.897742565,LastTimestamp:2026-02-23 12:59:14.608880937 +0000 UTC m=+12.897742565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.832722 master-0 kubenswrapper[4202]: E0223 12:59:15.832546 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1896e1991bb1f299\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1991bb1f299 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.127761561 +0000 UTC m=+9.416623229,LastTimestamp:2026-02-23 12:59:14.989495204 +0000 UTC m=+13.278356872,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:15.840504 master-0 kubenswrapper[4202]: E0223 12:59:15.840329 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1896e1991c9a0961\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1991c9a0961 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:11.142971745 +0000 UTC m=+9.431833403,LastTimestamp:2026-02-23 12:59:15.1307746 +0000 UTC m=+13.419636228,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:59:16.349659 master-0 kubenswrapper[4202]: I0223 12:59:16.349579 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:16.614542 master-0 kubenswrapper[4202]: I0223 12:59:16.613680 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:16.614542 master-0 kubenswrapper[4202]: I0223 12:59:16.613680 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:16.615076 master-0 kubenswrapper[4202]: I0223 12:59:16.615015 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:16.615129 master-0 kubenswrapper[4202]: I0223 12:59:16.615084 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:16.615129 master-0 kubenswrapper[4202]: I0223 12:59:16.615105 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:16.615376 master-0 kubenswrapper[4202]: I0223 12:59:16.615277 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:16.615434 master-0 kubenswrapper[4202]: I0223 12:59:16.615403 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 
12:59:16.615434 master-0 kubenswrapper[4202]: I0223 12:59:16.615421 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:16.633372 master-0 kubenswrapper[4202]: I0223 12:59:16.633236 4202 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:16.641632 master-0 kubenswrapper[4202]: I0223 12:59:16.641563 4202 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:17.264623 master-0 kubenswrapper[4202]: I0223 12:59:17.264511 4202 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 12:59:17.283619 master-0 kubenswrapper[4202]: I0223 12:59:17.283574 4202 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 12:59:17.344911 master-0 kubenswrapper[4202]: I0223 12:59:17.344817 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:17.617380 master-0 kubenswrapper[4202]: I0223 12:59:17.617052 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:17.618767 master-0 kubenswrapper[4202]: I0223 12:59:17.618686 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:17.618905 master-0 kubenswrapper[4202]: I0223 12:59:17.618775 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:17.618905 master-0 kubenswrapper[4202]: I0223 12:59:17.618804 4202 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:17.625411 master-0 kubenswrapper[4202]: I0223 12:59:17.625333 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:59:18.011842 master-0 kubenswrapper[4202]: I0223 12:59:18.011714 4202 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:18.012148 master-0 kubenswrapper[4202]: I0223 12:59:18.011999 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:18.013769 master-0 kubenswrapper[4202]: I0223 12:59:18.013681 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:18.013769 master-0 kubenswrapper[4202]: I0223 12:59:18.013758 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:18.013984 master-0 kubenswrapper[4202]: I0223 12:59:18.013783 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:18.020450 master-0 kubenswrapper[4202]: I0223 12:59:18.020326 4202 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:59:18.347930 master-0 kubenswrapper[4202]: I0223 12:59:18.347758 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:18.619251 master-0 kubenswrapper[4202]: I0223 12:59:18.619079 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:18.619251 master-0 kubenswrapper[4202]: I0223 
12:59:18.619107 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:18.620620 master-0 kubenswrapper[4202]: I0223 12:59:18.620367 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:18.620620 master-0 kubenswrapper[4202]: I0223 12:59:18.620406 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:18.620620 master-0 kubenswrapper[4202]: I0223 12:59:18.620423 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:18.620620 master-0 kubenswrapper[4202]: I0223 12:59:18.620491 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:18.620620 master-0 kubenswrapper[4202]: I0223 12:59:18.620562 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:18.620620 master-0 kubenswrapper[4202]: I0223 12:59:18.620583 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:59:18.809110 master-0 kubenswrapper[4202]: W0223 12:59:18.808987 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 23 12:59:18.809110 master-0 kubenswrapper[4202]: E0223 12:59:18.809077 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 23 12:59:18.829332 master-0 kubenswrapper[4202]: W0223 12:59:18.829268 4202 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 23 12:59:18.829578 master-0 kubenswrapper[4202]: E0223 12:59:18.829377 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 23 12:59:19.345737 master-0 kubenswrapper[4202]: I0223 12:59:19.345673 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:19.622593 master-0 kubenswrapper[4202]: I0223 12:59:19.622388 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:19.623804 master-0 kubenswrapper[4202]: I0223 12:59:19.623739 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:19.623804 master-0 kubenswrapper[4202]: I0223 12:59:19.623794 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:19.623804 master-0 kubenswrapper[4202]: I0223 12:59:19.623803 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:20.271273 master-0 kubenswrapper[4202]: W0223 12:59:20.271225 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:20.271515 master-0 kubenswrapper[4202]: E0223 12:59:20.271304 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 23 12:59:20.344442 master-0 kubenswrapper[4202]: I0223 12:59:20.344322 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:20.532371 master-0 kubenswrapper[4202]: I0223 12:59:20.532132 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:20.534266 master-0 kubenswrapper[4202]: I0223 12:59:20.534176 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:20.534266 master-0 kubenswrapper[4202]: I0223 12:59:20.534258 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:20.534654 master-0 kubenswrapper[4202]: I0223 12:59:20.534281 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:20.534920 master-0 kubenswrapper[4202]: I0223 12:59:20.534864 4202 scope.go:117] "RemoveContainer" containerID="ee9fab317483dbba39884f1ba91a84af81e0a3f2cc77667de7087f85d3c7b33a"
Feb 23 12:59:20.547692 master-0 kubenswrapper[4202]: E0223 12:59:20.547457 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e1980b62cb18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1980b62cb18 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.559171352 +0000 UTC m=+4.848032980,LastTimestamp:2026-02-23 12:59:20.537862684 +0000 UTC m=+18.826724352,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:20.794432 master-0 kubenswrapper[4202]: W0223 12:59:20.794389 4202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 23 12:59:20.795053 master-0 kubenswrapper[4202]: E0223 12:59:20.794444 4202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 23 12:59:20.801960 master-0 kubenswrapper[4202]: E0223 12:59:20.801826 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e19818929a1b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e19818929a1b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.780408347 +0000 UTC m=+5.069269975,LastTimestamp:2026-02-23 12:59:20.795860652 +0000 UTC m=+19.084722300,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:20.814234 master-0 kubenswrapper[4202]: E0223 12:59:20.814087 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e198195c3547\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e198195c3547 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:06.793620807 +0000 UTC m=+5.082482435,LastTimestamp:2026-02-23 12:59:20.808116829 +0000 UTC m=+19.096978467,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:21.343455 master-0 kubenswrapper[4202]: I0223 12:59:21.343383 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:21.554721 master-0 kubenswrapper[4202]: I0223 12:59:21.554648 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 12:59:21.554965 master-0 kubenswrapper[4202]: I0223 12:59:21.554854 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:21.556199 master-0 kubenswrapper[4202]: I0223 12:59:21.556056 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:21.556199 master-0 kubenswrapper[4202]: I0223 12:59:21.556134 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:21.556199 master-0 kubenswrapper[4202]: I0223 12:59:21.556152 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:21.630147 master-0 kubenswrapper[4202]: I0223 12:59:21.630043 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 23 12:59:21.630772 master-0 kubenswrapper[4202]: I0223 12:59:21.630700 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log"
Feb 23 12:59:21.631221 master-0 kubenswrapper[4202]: I0223 12:59:21.631156 4202 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="d54510cfaf3a8db44b2b91fdd016f016ce44a9717586634189edc3fd0ee04e3b" exitCode=1
Feb 23 12:59:21.631381 master-0 kubenswrapper[4202]: I0223 12:59:21.631234 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"d54510cfaf3a8db44b2b91fdd016f016ce44a9717586634189edc3fd0ee04e3b"}
Feb 23 12:59:21.631381 master-0 kubenswrapper[4202]: I0223 12:59:21.631322 4202 scope.go:117] "RemoveContainer" containerID="ee9fab317483dbba39884f1ba91a84af81e0a3f2cc77667de7087f85d3c7b33a"
Feb 23 12:59:21.631569 master-0 kubenswrapper[4202]: I0223 12:59:21.631420 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:21.632156 master-0 kubenswrapper[4202]: I0223 12:59:21.632108 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:21.632156 master-0 kubenswrapper[4202]: I0223 12:59:21.632134 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:21.632156 master-0 kubenswrapper[4202]: I0223 12:59:21.632142 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:21.632498 master-0 kubenswrapper[4202]: I0223 12:59:21.632450 4202 scope.go:117] "RemoveContainer" containerID="d54510cfaf3a8db44b2b91fdd016f016ce44a9717586634189edc3fd0ee04e3b"
Feb 23 12:59:21.632666 master-0 kubenswrapper[4202]: E0223 12:59:21.632615 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 23 12:59:21.641167 master-0 kubenswrapper[4202]: E0223 12:59:21.640939 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e198834583f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e198834583f1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:08.570518513 +0000 UTC m=+6.859380131,LastTimestamp:2026-02-23 12:59:21.632592347 +0000 UTC m=+19.921453975,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:21.973778 master-0 kubenswrapper[4202]: E0223 12:59:21.973470 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 23 12:59:22.025595 master-0 kubenswrapper[4202]: I0223 12:59:22.025529 4202 csr.go:261] certificate signing request csr-f6c2c is approved, waiting to be issued
Feb 23 12:59:22.240979 master-0 kubenswrapper[4202]: I0223 12:59:22.240783 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:22.242800 master-0 kubenswrapper[4202]: I0223 12:59:22.242731 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:22.242911 master-0 kubenswrapper[4202]: I0223 12:59:22.242809 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:22.242911 master-0 kubenswrapper[4202]: I0223 12:59:22.242830 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:22.243131 master-0 kubenswrapper[4202]: I0223 12:59:22.243088 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:22.252227 master-0 kubenswrapper[4202]: E0223 12:59:22.252184 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 23 12:59:22.343017 master-0 kubenswrapper[4202]: I0223 12:59:22.342916 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:22.469875 master-0 kubenswrapper[4202]: I0223 12:59:22.469726 4202 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 12:59:22.470277 master-0 kubenswrapper[4202]: I0223 12:59:22.470036 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:22.471741 master-0 kubenswrapper[4202]: I0223 12:59:22.471675 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:22.471741 master-0 kubenswrapper[4202]: I0223 12:59:22.471727 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:22.471741 master-0 kubenswrapper[4202]: I0223 12:59:22.471739 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:22.473959 master-0 kubenswrapper[4202]: I0223 12:59:22.473908 4202 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 12:59:22.512007 master-0 kubenswrapper[4202]: E0223 12:59:22.511892 4202 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:59:22.636941 master-0 kubenswrapper[4202]: I0223 12:59:22.636864 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 23 12:59:22.637927 master-0 kubenswrapper[4202]: I0223 12:59:22.637895 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:22.638907 master-0 kubenswrapper[4202]: I0223 12:59:22.638881 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:22.638965 master-0 kubenswrapper[4202]: I0223 12:59:22.638914 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:22.638965 master-0 kubenswrapper[4202]: I0223 12:59:22.638926 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:23.344202 master-0 kubenswrapper[4202]: I0223 12:59:23.344112 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:24.343952 master-0 kubenswrapper[4202]: I0223 12:59:24.343873 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:25.093827 master-0 kubenswrapper[4202]: I0223 12:59:25.093765 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 12:59:25.094481 master-0 kubenswrapper[4202]: I0223 12:59:25.093934 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:25.095226 master-0 kubenswrapper[4202]: I0223 12:59:25.095168 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:25.095272 master-0 kubenswrapper[4202]: I0223 12:59:25.095249 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:25.095309 master-0 kubenswrapper[4202]: I0223 12:59:25.095272 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:25.097679 master-0 kubenswrapper[4202]: I0223 12:59:25.097635 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 12:59:25.342611 master-0 kubenswrapper[4202]: I0223 12:59:25.342487 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:25.645623 master-0 kubenswrapper[4202]: I0223 12:59:25.645509 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:25.647213 master-0 kubenswrapper[4202]: I0223 12:59:25.647128 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:25.647393 master-0 kubenswrapper[4202]: I0223 12:59:25.647221 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:25.647393 master-0 kubenswrapper[4202]: I0223 12:59:25.647248 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:26.344592 master-0 kubenswrapper[4202]: I0223 12:59:26.344487 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:27.343896 master-0 kubenswrapper[4202]: I0223 12:59:27.343780 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:28.345514 master-0 kubenswrapper[4202]: I0223 12:59:28.345389 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:28.981738 master-0 kubenswrapper[4202]: E0223 12:59:28.981625 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 23 12:59:29.253532 master-0 kubenswrapper[4202]: I0223 12:59:29.252782 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:29.254918 master-0 kubenswrapper[4202]: I0223 12:59:29.254848 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:29.255048 master-0 kubenswrapper[4202]: I0223 12:59:29.254932 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:29.255048 master-0 kubenswrapper[4202]: I0223 12:59:29.254952 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:29.255048 master-0 kubenswrapper[4202]: I0223 12:59:29.255025 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:29.263824 master-0 kubenswrapper[4202]: E0223 12:59:29.263694 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 23 12:59:29.345190 master-0 kubenswrapper[4202]: I0223 12:59:29.345116 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:30.344025 master-0 kubenswrapper[4202]: I0223 12:59:30.343949 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:31.342721 master-0 kubenswrapper[4202]: I0223 12:59:31.342235 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:32.345229 master-0 kubenswrapper[4202]: I0223 12:59:32.345140 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:32.512135 master-0 kubenswrapper[4202]: E0223 12:59:32.512052 4202 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:59:32.532526 master-0 kubenswrapper[4202]: I0223 12:59:32.532320 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:32.533645 master-0 kubenswrapper[4202]: I0223 12:59:32.533579 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:32.533645 master-0 kubenswrapper[4202]: I0223 12:59:32.533647 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:32.533871 master-0 kubenswrapper[4202]: I0223 12:59:32.533666 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:32.534299 master-0 kubenswrapper[4202]: I0223 12:59:32.534240 4202 scope.go:117] "RemoveContainer" containerID="d54510cfaf3a8db44b2b91fdd016f016ce44a9717586634189edc3fd0ee04e3b"
Feb 23 12:59:32.534606 master-0 kubenswrapper[4202]: E0223 12:59:32.534545 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 23 12:59:32.543490 master-0 kubenswrapper[4202]: E0223 12:59:32.543180 4202 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e198834583f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e198834583f1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:59:08.570518513 +0000 UTC m=+6.859380131,LastTimestamp:2026-02-23 12:59:32.534497189 +0000 UTC m=+30.823358857,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:59:33.343878 master-0 kubenswrapper[4202]: I0223 12:59:33.343798 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:34.342740 master-0 kubenswrapper[4202]: I0223 12:59:34.342645 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:35.344756 master-0 kubenswrapper[4202]: I0223 12:59:35.344667 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:35.991166 master-0 kubenswrapper[4202]: E0223 12:59:35.991071 4202 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 23 12:59:36.264017 master-0 kubenswrapper[4202]: I0223 12:59:36.263922 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:36.265828 master-0 kubenswrapper[4202]: I0223 12:59:36.265765 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:36.265828 master-0 kubenswrapper[4202]: I0223 12:59:36.265825 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:36.266064 master-0 kubenswrapper[4202]: I0223 12:59:36.265843 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:36.266064 master-0 kubenswrapper[4202]: I0223 12:59:36.265928 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:36.274699 master-0 kubenswrapper[4202]: E0223 12:59:36.274641 4202 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 23 12:59:36.344692 master-0 kubenswrapper[4202]: I0223 12:59:36.344588 4202 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 12:59:36.745440 master-0 kubenswrapper[4202]: I0223 12:59:36.745201 4202 csr.go:257] certificate signing request csr-f6c2c is issued
Feb 23 12:59:37.207698 master-0 kubenswrapper[4202]: I0223 12:59:37.207614 4202 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 23 12:59:37.356511 master-0 kubenswrapper[4202]: I0223 12:59:37.356430 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:37.375579 master-0 kubenswrapper[4202]: I0223 12:59:37.375495 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:37.432907 master-0 kubenswrapper[4202]: I0223 12:59:37.432854 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:37.503120 master-0 kubenswrapper[4202]: I0223 12:59:37.503000 4202 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 23 12:59:37.690779 master-0 kubenswrapper[4202]: I0223 12:59:37.690717 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:37.690779 master-0 kubenswrapper[4202]: E0223 12:59:37.690774 4202 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 23 12:59:37.713715 master-0 kubenswrapper[4202]: I0223 12:59:37.713641 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:37.730035 master-0 kubenswrapper[4202]: I0223 12:59:37.729980 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:37.746653 master-0 kubenswrapper[4202]: I0223 12:59:37.746597 4202 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 12:51:08 +0000 UTC, rotation deadline is 2026-02-24 08:11:47.651464596 +0000 UTC
Feb 23 12:59:37.746653 master-0 kubenswrapper[4202]: I0223 12:59:37.746643 4202 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h12m9.904827324s for next certificate rotation
Feb 23 12:59:37.794767 master-0 kubenswrapper[4202]: I0223 12:59:37.794707 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:38.071287 master-0 kubenswrapper[4202]: I0223 12:59:38.071093 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:38.071287 master-0 kubenswrapper[4202]: E0223 12:59:38.071166 4202 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 23 12:59:38.178769 master-0 kubenswrapper[4202]: I0223 12:59:38.178659 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:38.201351 master-0 kubenswrapper[4202]: I0223 12:59:38.201279 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:38.256040 master-0 kubenswrapper[4202]: I0223 12:59:38.255987 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:38.527804 master-0 kubenswrapper[4202]: I0223 12:59:38.527753 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:38.527804 master-0 kubenswrapper[4202]: E0223 12:59:38.527789 4202 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 23 12:59:39.076521 master-0 kubenswrapper[4202]: I0223 12:59:39.076316 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:39.089859 master-0 kubenswrapper[4202]: I0223 12:59:39.089739 4202 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 12:59:39.092502 master-0 kubenswrapper[4202]: I0223 12:59:39.092449 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:39.151065 master-0 kubenswrapper[4202]: I0223 12:59:39.150950 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:39.261572 master-0 kubenswrapper[4202]: I0223 12:59:39.261446 4202 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 12:59:39.343489 master-0 kubenswrapper[4202]: I0223 12:59:39.343291 4202 apiserver.go:52] "Watching apiserver"
Feb 23 12:59:39.347225 master-0 kubenswrapper[4202]: I0223 12:59:39.347155 4202 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 12:59:39.347405 master-0 kubenswrapper[4202]: I0223 12:59:39.347331 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[]
Feb 23 12:59:39.427202 master-0 kubenswrapper[4202]: I0223 12:59:39.427113 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:39.427202 master-0 kubenswrapper[4202]: E0223 12:59:39.427163 4202 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 23 12:59:39.447561 master-0 kubenswrapper[4202]: I0223 12:59:39.447490 4202 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 23 12:59:41.275011 master-0 kubenswrapper[4202]: I0223 12:59:41.274783 4202 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 23 12:59:42.512411 master-0 kubenswrapper[4202]: E0223 12:59:42.512273 4202 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:59:42.977009 master-0 kubenswrapper[4202]: I0223 12:59:42.976825 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:42.994000 master-0 kubenswrapper[4202]: I0223 12:59:42.993944 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:42.997164 master-0 kubenswrapper[4202]: E0223 12:59:42.997088 4202 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Feb 23 12:59:43.053765 master-0 kubenswrapper[4202]: I0223 12:59:43.053692 4202 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 23 12:59:43.275923 master-0 kubenswrapper[4202]: I0223 12:59:43.275847 4202 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:43.278130 master-0 kubenswrapper[4202]: I0223 12:59:43.278074 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:43.278254 master-0 kubenswrapper[4202]: I0223 12:59:43.278148 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:43.278254 master-0 kubenswrapper[4202]: I0223 12:59:43.278167 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:43.278379 master-0 kubenswrapper[4202]: I0223 12:59:43.278302 4202 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:59:43.290766 master-0 kubenswrapper[4202]: I0223 12:59:43.290675 4202 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 23 12:59:43.372144 master-0 kubenswrapper[4202]: I0223 12:59:43.372062 4202 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 23 12:59:43.395625 master-0 kubenswrapper[4202]: I0223 12:59:43.395561 4202 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 23 12:59:43.725783 master-0 kubenswrapper[4202]: I0223 12:59:43.725558 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"]
Feb 23 12:59:43.726862 master-0 kubenswrapper[4202]: I0223 12:59:43.725889 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 12:59:43.729210 master-0 kubenswrapper[4202]: I0223 12:59:43.729126 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 23 12:59:43.729210 master-0 kubenswrapper[4202]: I0223 12:59:43.729204 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 23 12:59:43.730623 master-0 kubenswrapper[4202]: I0223 12:59:43.730559 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 23 12:59:43.817801 master-0 kubenswrapper[4202]: I0223 12:59:43.817722 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 12:59:43.818064 master-0 kubenswrapper[4202]: I0223 12:59:43.817810 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 12:59:43.818064 master-0 kubenswrapper[4202]: I0223 12:59:43.817863 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 12:59:43.818064 master-0 kubenswrapper[4202]: I0223 12:59:43.817903 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 12:59:43.818064 master-0 kubenswrapper[4202]: I0223 12:59:43.817946 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 12:59:43.919313 master-0 kubenswrapper[4202]: I0223 12:59:43.919219 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID:
\"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:43.919313 master-0 kubenswrapper[4202]: I0223 12:59:43.919308 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:43.919778 master-0 kubenswrapper[4202]: I0223 12:59:43.919579 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:43.919778 master-0 kubenswrapper[4202]: I0223 12:59:43.919720 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:43.919845 master-0 kubenswrapper[4202]: I0223 12:59:43.919792 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:43.919928 master-0 kubenswrapper[4202]: I0223 12:59:43.919895 4202 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:43.919969 master-0 kubenswrapper[4202]: I0223 12:59:43.919896 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:43.919969 master-0 kubenswrapper[4202]: E0223 12:59:43.919958 4202 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:43.920079 master-0 kubenswrapper[4202]: E0223 12:59:43.920059 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:44.420030927 +0000 UTC m=+42.708892565 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:43.920963 master-0 kubenswrapper[4202]: I0223 12:59:43.920909 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:43.951045 master-0 kubenswrapper[4202]: I0223 12:59:43.950958 4202 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 12:59:43.958630 master-0 kubenswrapper[4202]: I0223 12:59:43.958526 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:44.424985 master-0 kubenswrapper[4202]: I0223 12:59:44.424862 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:44.425417 master-0 kubenswrapper[4202]: E0223 12:59:44.425116 4202 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:44.425417 master-0 kubenswrapper[4202]: E0223 12:59:44.425314 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:45.425282835 +0000 UTC m=+43.714144503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:44.552004 master-0 kubenswrapper[4202]: I0223 12:59:44.551947 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 23 12:59:44.552731 master-0 kubenswrapper[4202]: I0223 12:59:44.552654 4202 scope.go:117] "RemoveContainer" containerID="d54510cfaf3a8db44b2b91fdd016f016ce44a9717586634189edc3fd0ee04e3b" Feb 23 12:59:44.820670 master-0 kubenswrapper[4202]: I0223 12:59:44.820575 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7d7db75979-q7q5x"] Feb 23 12:59:44.821747 master-0 kubenswrapper[4202]: I0223 12:59:44.820954 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:44.823983 master-0 kubenswrapper[4202]: I0223 12:59:44.823918 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 12:59:44.825437 master-0 kubenswrapper[4202]: I0223 12:59:44.825137 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 12:59:44.825437 master-0 kubenswrapper[4202]: I0223 12:59:44.825163 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 12:59:44.930023 master-0 kubenswrapper[4202]: I0223 12:59:44.929862 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:44.930023 master-0 kubenswrapper[4202]: I0223 12:59:44.929965 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:44.930023 master-0 kubenswrapper[4202]: I0223 12:59:44.929998 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6q5\" (UniqueName: \"kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " 
pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:45.030532 master-0 kubenswrapper[4202]: I0223 12:59:45.030366 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6q5\" (UniqueName: \"kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:45.030857 master-0 kubenswrapper[4202]: I0223 12:59:45.030772 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:45.031018 master-0 kubenswrapper[4202]: I0223 12:59:45.030961 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:45.031093 master-0 kubenswrapper[4202]: I0223 12:59:45.031054 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:45.039826 master-0 kubenswrapper[4202]: I0223 12:59:45.039687 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:45.060184 master-0 kubenswrapper[4202]: I0223 12:59:45.060110 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6q5\" (UniqueName: \"kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:45.159968 master-0 kubenswrapper[4202]: I0223 12:59:45.159868 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 12:59:45.178740 master-0 kubenswrapper[4202]: W0223 12:59:45.178675 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d7c1ea0_e3c1_4494_bb27_058200b93ed7.slice/crio-7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8 WatchSource:0}: Error finding container 7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8: Status 404 returned error can't find the container with id 7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8 Feb 23 12:59:45.434877 master-0 kubenswrapper[4202]: I0223 12:59:45.434681 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:45.435188 master-0 kubenswrapper[4202]: E0223 12:59:45.434898 4202 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:45.435188 master-0 kubenswrapper[4202]: E0223 12:59:45.434996 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:47.4349662 +0000 UTC m=+45.723827858 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:45.708252 master-0 kubenswrapper[4202]: I0223 12:59:45.708044 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" event={"ID":"0d7c1ea0-e3c1-4494-bb27-058200b93ed7","Type":"ContainerStarted","Data":"7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8"} Feb 23 12:59:45.710685 master-0 kubenswrapper[4202]: I0223 12:59:45.710640 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/3.log" Feb 23 12:59:45.712139 master-0 kubenswrapper[4202]: I0223 12:59:45.712061 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 23 12:59:45.713000 master-0 kubenswrapper[4202]: I0223 12:59:45.712941 4202 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736" exitCode=1 Feb 23 12:59:45.713115 master-0 kubenswrapper[4202]: 
I0223 12:59:45.713012 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736"} Feb 23 12:59:45.713115 master-0 kubenswrapper[4202]: I0223 12:59:45.713085 4202 scope.go:117] "RemoveContainer" containerID="d54510cfaf3a8db44b2b91fdd016f016ce44a9717586634189edc3fd0ee04e3b" Feb 23 12:59:45.713770 master-0 kubenswrapper[4202]: I0223 12:59:45.713702 4202 scope.go:117] "RemoveContainer" containerID="1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736" Feb 23 12:59:45.714082 master-0 kubenswrapper[4202]: E0223 12:59:45.714017 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 23 12:59:46.718851 master-0 kubenswrapper[4202]: I0223 12:59:46.718788 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/3.log" Feb 23 12:59:46.720177 master-0 kubenswrapper[4202]: I0223 12:59:46.720122 4202 scope.go:117] "RemoveContainer" containerID="1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736" Feb 23 12:59:46.722663 master-0 kubenswrapper[4202]: E0223 12:59:46.721749 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio 
pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 23 12:59:47.449583 master-0 kubenswrapper[4202]: I0223 12:59:47.449497 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 12:59:47.450208 master-0 kubenswrapper[4202]: E0223 12:59:47.449788 4202 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:47.450208 master-0 kubenswrapper[4202]: E0223 12:59:47.450128 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:51.450090733 +0000 UTC m=+49.738952401 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:48.563287 master-0 kubenswrapper[4202]: I0223 12:59:48.562494 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-nktl9"] Feb 23 12:59:48.563287 master-0 kubenswrapper[4202]: I0223 12:59:48.562881 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.565570 master-0 kubenswrapper[4202]: I0223 12:59:48.565142 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Feb 23 12:59:48.566834 master-0 kubenswrapper[4202]: I0223 12:59:48.566028 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Feb 23 12:59:48.566834 master-0 kubenswrapper[4202]: I0223 12:59:48.566417 4202 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Feb 23 12:59:48.566834 master-0 kubenswrapper[4202]: I0223 12:59:48.566739 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Feb 23 12:59:48.658312 master-0 kubenswrapper[4202]: I0223 12:59:48.658193 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-ca-bundle\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.658312 master-0 kubenswrapper[4202]: I0223 12:59:48.658253 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-resolv-conf\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.658772 master-0 kubenswrapper[4202]: I0223 12:59:48.658444 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: 
\"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-sno-bootstrap-files\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.658772 master-0 kubenswrapper[4202]: I0223 12:59:48.658571 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt57w\" (UniqueName: \"kubernetes.io/projected/e0063130-dfb5-4907-a000-f023a77c6441-kube-api-access-wt57w\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.658772 master-0 kubenswrapper[4202]: I0223 12:59:48.658624 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-var-run-resolv-conf\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.759891 master-0 kubenswrapper[4202]: I0223 12:59:48.759768 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-sno-bootstrap-files\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.760216 master-0 kubenswrapper[4202]: I0223 12:59:48.759995 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-sno-bootstrap-files\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " 
pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.760216 master-0 kubenswrapper[4202]: I0223 12:59:48.760051 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt57w\" (UniqueName: \"kubernetes.io/projected/e0063130-dfb5-4907-a000-f023a77c6441-kube-api-access-wt57w\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.760216 master-0 kubenswrapper[4202]: I0223 12:59:48.760156 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-var-run-resolv-conf\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.760216 master-0 kubenswrapper[4202]: I0223 12:59:48.760219 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-var-run-resolv-conf\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.760586 master-0 kubenswrapper[4202]: I0223 12:59:48.760226 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-ca-bundle\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 12:59:48.760586 master-0 kubenswrapper[4202]: I0223 12:59:48.760262 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: 
\"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-resolv-conf\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9"
Feb 23 12:59:48.760586 master-0 kubenswrapper[4202]: I0223 12:59:48.760303 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-ca-bundle\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9"
Feb 23 12:59:48.760586 master-0 kubenswrapper[4202]: I0223 12:59:48.760334 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-resolv-conf\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9"
Feb 23 12:59:48.778772 master-0 kubenswrapper[4202]: I0223 12:59:48.778684 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt57w\" (UniqueName: \"kubernetes.io/projected/e0063130-dfb5-4907-a000-f023a77c6441-kube-api-access-wt57w\") pod \"assisted-installer-controller-nktl9\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") " pod="assisted-installer/assisted-installer-controller-nktl9"
Feb 23 12:59:48.899086 master-0 kubenswrapper[4202]: I0223 12:59:48.898936 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-nktl9"
Feb 23 12:59:48.911691 master-0 kubenswrapper[4202]: W0223 12:59:48.911180 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0063130_dfb5_4907_a000_f023a77c6441.slice/crio-451186e3bd087f4e2a317072e4c098e400af909a2727bddeb8b4a06743ad2510 WatchSource:0}: Error finding container 451186e3bd087f4e2a317072e4c098e400af909a2727bddeb8b4a06743ad2510: Status 404 returned error can't find the container with id 451186e3bd087f4e2a317072e4c098e400af909a2727bddeb8b4a06743ad2510
Feb 23 12:59:49.729196 master-0 kubenswrapper[4202]: I0223 12:59:49.729122 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" event={"ID":"0d7c1ea0-e3c1-4494-bb27-058200b93ed7","Type":"ContainerStarted","Data":"eb968c3314cb31b6e0492300e6336271f0112ff545f49715e98a1fe86c9c31d2"}
Feb 23 12:59:49.731447 master-0 kubenswrapper[4202]: I0223 12:59:49.731396 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-nktl9" event={"ID":"e0063130-dfb5-4907-a000-f023a77c6441","Type":"ContainerStarted","Data":"451186e3bd087f4e2a317072e4c098e400af909a2727bddeb8b4a06743ad2510"}
Feb 23 12:59:49.748605 master-0 kubenswrapper[4202]: I0223 12:59:49.748471 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" podStartSLOduration=2.007113373 podStartE2EDuration="5.748431997s" podCreationTimestamp="2026-02-23 12:59:44 +0000 UTC" firstStartedPulling="2026-02-23 12:59:45.181784122 +0000 UTC m=+43.470645780" lastFinishedPulling="2026-02-23 12:59:48.923102776 +0000 UTC m=+47.211964404" observedRunningTime="2026-02-23 12:59:49.747727949 +0000 UTC m=+48.036589617" watchObservedRunningTime="2026-02-23 12:59:49.748431997 +0000 UTC m=+48.037293625"
Feb 23 12:59:51.484928 master-0 kubenswrapper[4202]: I0223 12:59:51.484788 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 12:59:51.485944 master-0 kubenswrapper[4202]: E0223 12:59:51.485255 4202 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 23 12:59:51.485944 master-0 kubenswrapper[4202]: E0223 12:59:51.485542 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:59.48549751 +0000 UTC m=+57.774359178 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found
Feb 23 12:59:51.613655 master-0 kubenswrapper[4202]: I0223 12:59:51.613573 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-2xhg2"]
Feb 23 12:59:51.613921 master-0 kubenswrapper[4202]: I0223 12:59:51.613871 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-2xhg2"
Feb 23 12:59:51.686720 master-0 kubenswrapper[4202]: I0223 12:59:51.686632 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6zt\" (UniqueName: \"kubernetes.io/projected/2cc34173-350b-40a9-a164-e500e96caf74-kube-api-access-9r6zt\") pod \"mtu-prober-2xhg2\" (UID: \"2cc34173-350b-40a9-a164-e500e96caf74\") " pod="openshift-network-operator/mtu-prober-2xhg2"
Feb 23 12:59:51.787930 master-0 kubenswrapper[4202]: I0223 12:59:51.787867 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6zt\" (UniqueName: \"kubernetes.io/projected/2cc34173-350b-40a9-a164-e500e96caf74-kube-api-access-9r6zt\") pod \"mtu-prober-2xhg2\" (UID: \"2cc34173-350b-40a9-a164-e500e96caf74\") " pod="openshift-network-operator/mtu-prober-2xhg2"
Feb 23 12:59:51.820300 master-0 kubenswrapper[4202]: I0223 12:59:51.820217 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6zt\" (UniqueName: \"kubernetes.io/projected/2cc34173-350b-40a9-a164-e500e96caf74-kube-api-access-9r6zt\") pod \"mtu-prober-2xhg2\" (UID: \"2cc34173-350b-40a9-a164-e500e96caf74\") " pod="openshift-network-operator/mtu-prober-2xhg2"
Feb 23 12:59:51.931885 master-0 kubenswrapper[4202]: I0223 12:59:51.931818 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-2xhg2"
Feb 23 12:59:51.950077 master-0 kubenswrapper[4202]: W0223 12:59:51.950023 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cc34173_350b_40a9_a164_e500e96caf74.slice/crio-f3f3e4b8ad89b1ad3094d7ed869a7d8b7a5de0449c5f774e500407712a8c5ce2 WatchSource:0}: Error finding container f3f3e4b8ad89b1ad3094d7ed869a7d8b7a5de0449c5f774e500407712a8c5ce2: Status 404 returned error can't find the container with id f3f3e4b8ad89b1ad3094d7ed869a7d8b7a5de0449c5f774e500407712a8c5ce2
Feb 23 12:59:52.743152 master-0 kubenswrapper[4202]: I0223 12:59:52.743069 4202 generic.go:334] "Generic (PLEG): container finished" podID="2cc34173-350b-40a9-a164-e500e96caf74" containerID="1ac5db7a64f2f6a417b7aa444094f3b04d08a91a07d4cc6037194f4d5f089c43" exitCode=0
Feb 23 12:59:52.743152 master-0 kubenswrapper[4202]: I0223 12:59:52.743143 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-2xhg2" event={"ID":"2cc34173-350b-40a9-a164-e500e96caf74","Type":"ContainerDied","Data":"1ac5db7a64f2f6a417b7aa444094f3b04d08a91a07d4cc6037194f4d5f089c43"}
Feb 23 12:59:52.743999 master-0 kubenswrapper[4202]: I0223 12:59:52.743209 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-2xhg2" event={"ID":"2cc34173-350b-40a9-a164-e500e96caf74","Type":"ContainerStarted","Data":"f3f3e4b8ad89b1ad3094d7ed869a7d8b7a5de0449c5f774e500407712a8c5ce2"}
Feb 23 12:59:53.377581 master-0 kubenswrapper[4202]: I0223 12:59:53.377398 4202 csr.go:261] certificate signing request csr-x8smn is approved, waiting to be issued
Feb 23 12:59:53.386826 master-0 kubenswrapper[4202]: I0223 12:59:53.386693 4202 csr.go:257] certificate signing request csr-x8smn is issued
Feb 23 12:59:54.026474 master-0 kubenswrapper[4202]: I0223 12:59:54.026421 4202 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-2xhg2"
Feb 23 12:59:54.105553 master-0 kubenswrapper[4202]: I0223 12:59:54.105470 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r6zt\" (UniqueName: \"kubernetes.io/projected/2cc34173-350b-40a9-a164-e500e96caf74-kube-api-access-9r6zt\") pod \"2cc34173-350b-40a9-a164-e500e96caf74\" (UID: \"2cc34173-350b-40a9-a164-e500e96caf74\") "
Feb 23 12:59:54.111667 master-0 kubenswrapper[4202]: I0223 12:59:54.111608 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc34173-350b-40a9-a164-e500e96caf74-kube-api-access-9r6zt" (OuterVolumeSpecName: "kube-api-access-9r6zt") pod "2cc34173-350b-40a9-a164-e500e96caf74" (UID: "2cc34173-350b-40a9-a164-e500e96caf74"). InnerVolumeSpecName "kube-api-access-9r6zt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 12:59:54.206817 master-0 kubenswrapper[4202]: I0223 12:59:54.206699 4202 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r6zt\" (UniqueName: \"kubernetes.io/projected/2cc34173-350b-40a9-a164-e500e96caf74-kube-api-access-9r6zt\") on node \"master-0\" DevicePath \"\""
Feb 23 12:59:54.389184 master-0 kubenswrapper[4202]: I0223 12:59:54.389060 4202 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 12:51:08 +0000 UTC, rotation deadline is 2026-02-24 06:25:42.135633548 +0000 UTC
Feb 23 12:59:54.389184 master-0 kubenswrapper[4202]: I0223 12:59:54.389118 4202 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h25m47.746520004s for next certificate rotation
Feb 23 12:59:54.751689 master-0 kubenswrapper[4202]: I0223 12:59:54.751623 4202 generic.go:334] "Generic (PLEG): container finished" podID="e0063130-dfb5-4907-a000-f023a77c6441" containerID="b055012e88ad61c2c4ff44365b26ade24e930d1fe63f02496d6b67176e6fe113" exitCode=0
Feb 23 12:59:54.751986 master-0 kubenswrapper[4202]: I0223 12:59:54.751729 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-nktl9" event={"ID":"e0063130-dfb5-4907-a000-f023a77c6441","Type":"ContainerDied","Data":"b055012e88ad61c2c4ff44365b26ade24e930d1fe63f02496d6b67176e6fe113"}
Feb 23 12:59:54.753606 master-0 kubenswrapper[4202]: I0223 12:59:54.753430 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-2xhg2" event={"ID":"2cc34173-350b-40a9-a164-e500e96caf74","Type":"ContainerDied","Data":"f3f3e4b8ad89b1ad3094d7ed869a7d8b7a5de0449c5f774e500407712a8c5ce2"}
Feb 23 12:59:54.753606 master-0 kubenswrapper[4202]: I0223 12:59:54.753514 4202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f3e4b8ad89b1ad3094d7ed869a7d8b7a5de0449c5f774e500407712a8c5ce2"
Feb 23 12:59:54.753606 master-0 kubenswrapper[4202]: I0223 12:59:54.753537 4202 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-2xhg2"
Feb 23 12:59:55.389736 master-0 kubenswrapper[4202]: I0223 12:59:55.389577 4202 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 12:51:08 +0000 UTC, rotation deadline is 2026-02-24 07:58:27.095250276 +0000 UTC
Feb 23 12:59:55.389736 master-0 kubenswrapper[4202]: I0223 12:59:55.389633 4202 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h58m31.705622716s for next certificate rotation
Feb 23 12:59:55.782533 master-0 kubenswrapper[4202]: I0223 12:59:55.782487 4202 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-nktl9"
Feb 23 12:59:55.919847 master-0 kubenswrapper[4202]: I0223 12:59:55.919786 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-resolv-conf\") pod \"e0063130-dfb5-4907-a000-f023a77c6441\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") "
Feb 23 12:59:55.920271 master-0 kubenswrapper[4202]: I0223 12:59:55.920237 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-sno-bootstrap-files\") pod \"e0063130-dfb5-4907-a000-f023a77c6441\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") "
Feb 23 12:59:55.920508 master-0 kubenswrapper[4202]: I0223 12:59:55.920482 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-var-run-resolv-conf\") pod \"e0063130-dfb5-4907-a000-f023a77c6441\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") "
Feb 23 12:59:55.920671 master-0 kubenswrapper[4202]: I0223 12:59:55.920647 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-ca-bundle\") pod \"e0063130-dfb5-4907-a000-f023a77c6441\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") "
Feb 23 12:59:55.920911 master-0 kubenswrapper[4202]: I0223 12:59:55.920002 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "e0063130-dfb5-4907-a000-f023a77c6441" (UID: "e0063130-dfb5-4907-a000-f023a77c6441"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 12:59:55.921012 master-0 kubenswrapper[4202]: I0223 12:59:55.920328 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "e0063130-dfb5-4907-a000-f023a77c6441" (UID: "e0063130-dfb5-4907-a000-f023a77c6441"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 12:59:55.921012 master-0 kubenswrapper[4202]: I0223 12:59:55.920562 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "e0063130-dfb5-4907-a000-f023a77c6441" (UID: "e0063130-dfb5-4907-a000-f023a77c6441"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 12:59:55.921012 master-0 kubenswrapper[4202]: I0223 12:59:55.920732 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "e0063130-dfb5-4907-a000-f023a77c6441" (UID: "e0063130-dfb5-4907-a000-f023a77c6441"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 12:59:55.921260 master-0 kubenswrapper[4202]: I0223 12:59:55.920892 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt57w\" (UniqueName: \"kubernetes.io/projected/e0063130-dfb5-4907-a000-f023a77c6441-kube-api-access-wt57w\") pod \"e0063130-dfb5-4907-a000-f023a77c6441\" (UID: \"e0063130-dfb5-4907-a000-f023a77c6441\") "
Feb 23 12:59:55.921521 master-0 kubenswrapper[4202]: I0223 12:59:55.921493 4202 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Feb 23 12:59:55.921720 master-0 kubenswrapper[4202]: I0223 12:59:55.921641 4202 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Feb 23 12:59:55.922175 master-0 kubenswrapper[4202]: I0223 12:59:55.922106 4202 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 12:59:55.922383 master-0 kubenswrapper[4202]: I0223 12:59:55.922329 4202 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/e0063130-dfb5-4907-a000-f023a77c6441-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Feb 23 12:59:55.937774 master-0 kubenswrapper[4202]: I0223 12:59:55.937721 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0063130-dfb5-4907-a000-f023a77c6441-kube-api-access-wt57w" (OuterVolumeSpecName: "kube-api-access-wt57w") pod "e0063130-dfb5-4907-a000-f023a77c6441" (UID: "e0063130-dfb5-4907-a000-f023a77c6441"). InnerVolumeSpecName "kube-api-access-wt57w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 12:59:56.023394 master-0 kubenswrapper[4202]: I0223 12:59:56.023143 4202 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt57w\" (UniqueName: \"kubernetes.io/projected/e0063130-dfb5-4907-a000-f023a77c6441-kube-api-access-wt57w\") on node \"master-0\" DevicePath \"\""
Feb 23 12:59:56.606394 master-0 kubenswrapper[4202]: I0223 12:59:56.606321 4202 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-2xhg2"]
Feb 23 12:59:56.610923 master-0 kubenswrapper[4202]: I0223 12:59:56.610879 4202 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-2xhg2"]
Feb 23 12:59:56.760666 master-0 kubenswrapper[4202]: I0223 12:59:56.760576 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-nktl9" event={"ID":"e0063130-dfb5-4907-a000-f023a77c6441","Type":"ContainerDied","Data":"451186e3bd087f4e2a317072e4c098e400af909a2727bddeb8b4a06743ad2510"}
Feb 23 12:59:56.760666 master-0 kubenswrapper[4202]: I0223 12:59:56.760647 4202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451186e3bd087f4e2a317072e4c098e400af909a2727bddeb8b4a06743ad2510"
Feb 23 12:59:56.761017 master-0 kubenswrapper[4202]: I0223 12:59:56.760702 4202 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-nktl9"
Feb 23 12:59:58.539127 master-0 kubenswrapper[4202]: I0223 12:59:58.539016 4202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc34173-350b-40a9-a164-e500e96caf74" path="/var/lib/kubelet/pods/2cc34173-350b-40a9-a164-e500e96caf74/volumes"
Feb 23 12:59:59.532951 master-0 kubenswrapper[4202]: I0223 12:59:59.532839 4202 scope.go:117] "RemoveContainer" containerID="1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736"
Feb 23 12:59:59.533257 master-0 kubenswrapper[4202]: E0223 12:59:59.533103 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 23 12:59:59.552431 master-0 kubenswrapper[4202]: I0223 12:59:59.552226 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 12:59:59.553480 master-0 kubenswrapper[4202]: E0223 12:59:59.552570 4202 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 23 12:59:59.553480 master-0 kubenswrapper[4202]: E0223 12:59:59.552706 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:15.552666971 +0000 UTC m=+73.841528649 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found
Feb 23 13:00:01.495892 master-0 kubenswrapper[4202]: I0223 13:00:01.495572 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6lk7x"]
Feb 23 13:00:01.495892 master-0 kubenswrapper[4202]: E0223 13:00:01.495729 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc34173-350b-40a9-a164-e500e96caf74" containerName="prober"
Feb 23 13:00:01.495892 master-0 kubenswrapper[4202]: I0223 13:00:01.495806 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc34173-350b-40a9-a164-e500e96caf74" containerName="prober"
Feb 23 13:00:01.495892 master-0 kubenswrapper[4202]: E0223 13:00:01.495827 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller"
Feb 23 13:00:01.495892 master-0 kubenswrapper[4202]: I0223 13:00:01.495844 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller"
Feb 23 13:00:01.495892 master-0 kubenswrapper[4202]: I0223 13:00:01.495899 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc34173-350b-40a9-a164-e500e96caf74" containerName="prober"
Feb 23 13:00:01.497068 master-0 kubenswrapper[4202]: I0223 13:00:01.495920 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller"
Feb 23 13:00:01.497068 master-0 kubenswrapper[4202]: I0223 13:00:01.496265 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.499158 master-0 kubenswrapper[4202]: I0223 13:00:01.499091 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 23 13:00:01.499319 master-0 kubenswrapper[4202]: I0223 13:00:01.499242 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 23 13:00:01.499430 master-0 kubenswrapper[4202]: I0223 13:00:01.499260 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 23 13:00:01.500788 master-0 kubenswrapper[4202]: I0223 13:00:01.500694 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 23 13:00:01.567105 master-0 kubenswrapper[4202]: I0223 13:00:01.567023 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567105 master-0 kubenswrapper[4202]: I0223 13:00:01.567083 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567105 master-0 kubenswrapper[4202]: I0223 13:00:01.567110 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567521 master-0 kubenswrapper[4202]: I0223 13:00:01.567222 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567521 master-0 kubenswrapper[4202]: I0223 13:00:01.567287 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567521 master-0 kubenswrapper[4202]: I0223 13:00:01.567322 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567521 master-0 kubenswrapper[4202]: I0223 13:00:01.567368 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdfs\" (UniqueName: \"kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567521 master-0 kubenswrapper[4202]: I0223 13:00:01.567402 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567521 master-0 kubenswrapper[4202]: I0223 13:00:01.567424 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567521 master-0 kubenswrapper[4202]: I0223 13:00:01.567447 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567886 master-0 kubenswrapper[4202]: I0223 13:00:01.567523 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567886 master-0 kubenswrapper[4202]: I0223 13:00:01.567583 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567886 master-0 kubenswrapper[4202]: I0223 13:00:01.567654 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567886 master-0 kubenswrapper[4202]: I0223 13:00:01.567682 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567886 master-0 kubenswrapper[4202]: I0223 13:00:01.567708 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567886 master-0 kubenswrapper[4202]: I0223 13:00:01.567742 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.567886 master-0 kubenswrapper[4202]: I0223 13:00:01.567765 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669263 master-0 kubenswrapper[4202]: I0223 13:00:01.669052 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669263 master-0 kubenswrapper[4202]: I0223 13:00:01.669133 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669263 master-0 kubenswrapper[4202]: I0223 13:00:01.669162 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669263 master-0 kubenswrapper[4202]: I0223 13:00:01.669188 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669263 master-0 kubenswrapper[4202]: I0223 13:00:01.669213 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669263 master-0 kubenswrapper[4202]: I0223 13:00:01.669241 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzdfs\" (UniqueName: \"kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669263 master-0 kubenswrapper[4202]: I0223 13:00:01.669268 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669263 master-0 kubenswrapper[4202]: I0223 13:00:01.669297 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669332 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669385 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669413 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669465 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669487 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669510 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669533 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669555 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669613 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669724 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669915 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669952 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.669957 master-0 kubenswrapper[4202]: I0223 13:00:01.669984 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:01.671172 master-0 kubenswrapper[4202]: I0223 13:00:01.670891 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\")
" pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671172 master-0 kubenswrapper[4202]: I0223 13:00:01.671019 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671172 master-0 kubenswrapper[4202]: I0223 13:00:01.671064 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671314 master-0 kubenswrapper[4202]: I0223 13:00:01.671188 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671426 master-0 kubenswrapper[4202]: I0223 13:00:01.671391 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671514 master-0 kubenswrapper[4202]: I0223 13:00:01.671440 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671514 master-0 kubenswrapper[4202]: I0223 13:00:01.671451 4202 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671514 master-0 kubenswrapper[4202]: I0223 13:00:01.671500 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671616 master-0 kubenswrapper[4202]: I0223 13:00:01.671539 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671616 master-0 kubenswrapper[4202]: I0223 13:00:01.671578 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671616 master-0 kubenswrapper[4202]: I0223 13:00:01.671513 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.671721 master-0 kubenswrapper[4202]: I0223 13:00:01.671657 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.690864 master-0 kubenswrapper[4202]: I0223 13:00:01.690805 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-srlm4"] Feb 23 13:00:01.691454 master-0 kubenswrapper[4202]: I0223 13:00:01.691430 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.691918 master-0 kubenswrapper[4202]: I0223 13:00:01.691876 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzdfs\" (UniqueName: \"kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.694096 master-0 kubenswrapper[4202]: I0223 13:00:01.694065 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 13:00:01.694333 master-0 kubenswrapper[4202]: I0223 13:00:01.694301 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 23 13:00:01.770482 master-0 kubenswrapper[4202]: I0223 13:00:01.770399 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.770718 master-0 kubenswrapper[4202]: I0223 13:00:01.770502 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p6b4v\" (UniqueName: \"kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.770718 master-0 kubenswrapper[4202]: I0223 13:00:01.770548 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.770718 master-0 kubenswrapper[4202]: I0223 13:00:01.770638 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.770718 master-0 kubenswrapper[4202]: I0223 13:00:01.770701 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.770841 master-0 kubenswrapper[4202]: I0223 13:00:01.770740 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " 
pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.770841 master-0 kubenswrapper[4202]: I0223 13:00:01.770801 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.770841 master-0 kubenswrapper[4202]: I0223 13:00:01.770833 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.813575 master-0 kubenswrapper[4202]: I0223 13:00:01.813485 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6lk7x" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.876833 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6b4v\" (UniqueName: \"kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.877276 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.877336 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.877428 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.877470 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.877780 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.877814 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.877890 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.878103 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.878208 
4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.878436 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.878502 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.879130 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.879266 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 
13:00:01.880654 master-0 kubenswrapper[4202]: I0223 13:00:01.880104 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:01.924822 master-0 kubenswrapper[4202]: I0223 13:00:01.924726 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6b4v\" (UniqueName: \"kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:02.016113 master-0 kubenswrapper[4202]: I0223 13:00:02.016014 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:02.033065 master-0 kubenswrapper[4202]: W0223 13:00:02.032977 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99f14e64_228f_4b9e_991f_ee398fe7bb8a.slice/crio-da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917 WatchSource:0}: Error finding container da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917: Status 404 returned error can't find the container with id da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917 Feb 23 13:00:02.490852 master-0 kubenswrapper[4202]: I0223 13:00:02.485308 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bbrcr"] Feb 23 13:00:02.490852 master-0 kubenswrapper[4202]: I0223 13:00:02.485688 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:02.490852 master-0 kubenswrapper[4202]: E0223 13:00:02.485747 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:02.584122 master-0 kubenswrapper[4202]: I0223 13:00:02.584043 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:02.584648 master-0 kubenswrapper[4202]: I0223 13:00:02.584135 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmqj\" (UniqueName: \"kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:02.685813 master-0 kubenswrapper[4202]: I0223 13:00:02.685671 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:02.685813 master-0 kubenswrapper[4202]: I0223 13:00:02.685767 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmqj\" 
(UniqueName: \"kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:02.686252 master-0 kubenswrapper[4202]: E0223 13:00:02.686010 4202 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:02.686252 master-0 kubenswrapper[4202]: E0223 13:00:02.686128 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:03.186103649 +0000 UTC m=+61.474965287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:02.725416 master-0 kubenswrapper[4202]: I0223 13:00:02.725318 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnmqj\" (UniqueName: \"kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:02.778269 master-0 kubenswrapper[4202]: I0223 13:00:02.778176 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lk7x" event={"ID":"d48d286d-4f37-4027-86cd-1580e6076613","Type":"ContainerStarted","Data":"4df5f2d226a98cd9443f9e29da033c2146ea5a128236486d62e724363fd7a50e"} Feb 23 13:00:02.779901 master-0 kubenswrapper[4202]: I0223 13:00:02.779854 4202 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerStarted","Data":"da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917"} Feb 23 13:00:03.191548 master-0 kubenswrapper[4202]: I0223 13:00:03.191362 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:03.191800 master-0 kubenswrapper[4202]: E0223 13:00:03.191655 4202 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:03.191800 master-0 kubenswrapper[4202]: E0223 13:00:03.191776 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:04.191745166 +0000 UTC m=+62.480606804 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:04.201270 master-0 kubenswrapper[4202]: I0223 13:00:04.200817 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:04.201270 master-0 kubenswrapper[4202]: E0223 13:00:04.201059 4202 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:04.202144 master-0 kubenswrapper[4202]: E0223 13:00:04.201395 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:06.201366049 +0000 UTC m=+64.490227747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:04.532728 master-0 kubenswrapper[4202]: I0223 13:00:04.532658 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:04.532934 master-0 kubenswrapper[4202]: E0223 13:00:04.532856 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:04.787032 master-0 kubenswrapper[4202]: I0223 13:00:04.786982 4202 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="c0cd8e6831fa2b7f0a83e05208d92bf5646225429df385d54f2e069a34fbf956" exitCode=0 Feb 23 13:00:04.787032 master-0 kubenswrapper[4202]: I0223 13:00:04.787034 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerDied","Data":"c0cd8e6831fa2b7f0a83e05208d92bf5646225429df385d54f2e069a34fbf956"} Feb 23 13:00:06.216965 master-0 kubenswrapper[4202]: I0223 13:00:06.216839 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:06.218283 master-0 kubenswrapper[4202]: E0223 13:00:06.217027 4202 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:06.218283 master-0 kubenswrapper[4202]: E0223 13:00:06.217106 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs 
podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:10.217082798 +0000 UTC m=+68.505944426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:06.532785 master-0 kubenswrapper[4202]: I0223 13:00:06.532670 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:06.533071 master-0 kubenswrapper[4202]: E0223 13:00:06.532891 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:08.532681 master-0 kubenswrapper[4202]: I0223 13:00:08.532573 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:08.533296 master-0 kubenswrapper[4202]: E0223 13:00:08.532827 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:10.251104 master-0 kubenswrapper[4202]: I0223 13:00:10.250981 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:10.252627 master-0 kubenswrapper[4202]: E0223 13:00:10.251203 4202 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:10.252627 master-0 kubenswrapper[4202]: E0223 13:00:10.251310 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:18.251276974 +0000 UTC m=+76.540138602 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:10.534678 master-0 kubenswrapper[4202]: I0223 13:00:10.534454 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:10.535418 master-0 kubenswrapper[4202]: E0223 13:00:10.534974 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:12.532939 master-0 kubenswrapper[4202]: I0223 13:00:12.532869 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:12.533652 master-0 kubenswrapper[4202]: E0223 13:00:12.533577 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:13.812738 master-0 kubenswrapper[4202]: I0223 13:00:13.812547 4202 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="de633df615dfa1f75b5da48e16feb9f5558220428b4dd98a89433d879af25256" exitCode=0 Feb 23 13:00:13.812738 master-0 kubenswrapper[4202]: I0223 13:00:13.812676 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerDied","Data":"de633df615dfa1f75b5da48e16feb9f5558220428b4dd98a89433d879af25256"} Feb 23 13:00:13.816031 master-0 kubenswrapper[4202]: I0223 13:00:13.815202 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lk7x" event={"ID":"d48d286d-4f37-4027-86cd-1580e6076613","Type":"ContainerStarted","Data":"22d3605ddbeb329af6824a8677cc74607ae13dd6e5d60d032112226d722460cf"} Feb 23 13:00:13.871247 master-0 kubenswrapper[4202]: I0223 13:00:13.863297 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6lk7x" podStartSLOduration=1.3182378909999999 podStartE2EDuration="12.863268581s" podCreationTimestamp="2026-02-23 13:00:01 
+0000 UTC" firstStartedPulling="2026-02-23 13:00:01.831807214 +0000 UTC m=+60.120668882" lastFinishedPulling="2026-02-23 13:00:13.376837934 +0000 UTC m=+71.665699572" observedRunningTime="2026-02-23 13:00:13.862995794 +0000 UTC m=+72.151857423" watchObservedRunningTime="2026-02-23 13:00:13.863268581 +0000 UTC m=+72.152130219" Feb 23 13:00:13.898064 master-0 kubenswrapper[4202]: I0223 13:00:13.897991 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"] Feb 23 13:00:13.898501 master-0 kubenswrapper[4202]: I0223 13:00:13.898476 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:13.903504 master-0 kubenswrapper[4202]: I0223 13:00:13.901522 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 13:00:13.903504 master-0 kubenswrapper[4202]: I0223 13:00:13.901660 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 13:00:13.903504 master-0 kubenswrapper[4202]: I0223 13:00:13.902189 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 13:00:13.903504 master-0 kubenswrapper[4202]: I0223 13:00:13.902474 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 13:00:13.903504 master-0 kubenswrapper[4202]: I0223 13:00:13.902672 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 13:00:14.086675 master-0 kubenswrapper[4202]: I0223 13:00:14.085818 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.086675 master-0 kubenswrapper[4202]: I0223 13:00:14.086098 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7sfw\" (UniqueName: \"kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.086675 master-0 kubenswrapper[4202]: I0223 13:00:14.086172 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.086675 master-0 kubenswrapper[4202]: I0223 13:00:14.086376 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.087211 master-0 kubenswrapper[4202]: I0223 13:00:14.087158 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h75w9"] Feb 23 13:00:14.088061 master-0 kubenswrapper[4202]: I0223 13:00:14.088035 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.090857 master-0 kubenswrapper[4202]: I0223 13:00:14.090713 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:00:14.092459 master-0 kubenswrapper[4202]: I0223 13:00:14.092401 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 13:00:14.187272 master-0 kubenswrapper[4202]: I0223 13:00:14.187187 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187272 master-0 kubenswrapper[4202]: I0223 13:00:14.187242 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhhw\" (UniqueName: \"kubernetes.io/projected/25bf18b4-6d82-47ec-b51f-1221045c2975-kube-api-access-jrhhw\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187272 master-0 kubenswrapper[4202]: I0223 13:00:14.187267 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-env-overrides\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187569 master-0 kubenswrapper[4202]: I0223 13:00:14.187303 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7sfw\" (UniqueName: 
\"kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.187569 master-0 kubenswrapper[4202]: I0223 13:00:14.187329 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-slash\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187569 master-0 kubenswrapper[4202]: I0223 13:00:14.187382 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.187653 master-0 kubenswrapper[4202]: I0223 13:00:14.187566 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-ovn\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187711 master-0 kubenswrapper[4202]: I0223 13:00:14.187676 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-script-lib\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187749 master-0 kubenswrapper[4202]: I0223 13:00:14.187721 4202 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-kubelet\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187886 master-0 kubenswrapper[4202]: I0223 13:00:14.187840 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-log-socket\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187919 master-0 kubenswrapper[4202]: I0223 13:00:14.187897 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-netd\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.187965 master-0 kubenswrapper[4202]: I0223 13:00:14.187945 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-systemd-units\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188045 master-0 kubenswrapper[4202]: I0223 13:00:14.188020 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-bin\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188084 
master-0 kubenswrapper[4202]: I0223 13:00:14.188053 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.188084 master-0 kubenswrapper[4202]: I0223 13:00:14.188077 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.188136 master-0 kubenswrapper[4202]: I0223 13:00:14.188097 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-netns\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188136 master-0 kubenswrapper[4202]: I0223 13:00:14.188119 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25bf18b4-6d82-47ec-b51f-1221045c2975-ovn-node-metrics-cert\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188304 master-0 kubenswrapper[4202]: I0223 13:00:14.188267 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides\") pod 
\"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.188383 master-0 kubenswrapper[4202]: I0223 13:00:14.188296 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-ovn-kubernetes\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188444 master-0 kubenswrapper[4202]: I0223 13:00:14.188416 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188477 master-0 kubenswrapper[4202]: I0223 13:00:14.188461 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-var-lib-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188508 master-0 kubenswrapper[4202]: I0223 13:00:14.188487 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-etc-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188541 master-0 kubenswrapper[4202]: I0223 13:00:14.188514 4202 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-systemd\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188541 master-0 kubenswrapper[4202]: I0223 13:00:14.188531 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-node-log\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.188596 master-0 kubenswrapper[4202]: I0223 13:00:14.188547 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-config\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.189248 master-0 kubenswrapper[4202]: I0223 13:00:14.189210 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.193996 master-0 kubenswrapper[4202]: I0223 13:00:14.193947 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.205646 master-0 kubenswrapper[4202]: I0223 13:00:14.205594 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7sfw\" (UniqueName: \"kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.219088 master-0 kubenswrapper[4202]: I0223 13:00:14.219048 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:14.233963 master-0 kubenswrapper[4202]: W0223 13:00:14.233885 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c80f4d_6b28_44f4_beef_01e705260452.slice/crio-1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c WatchSource:0}: Error finding container 1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c: Status 404 returned error can't find the container with id 1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c Feb 23 13:00:14.289071 master-0 kubenswrapper[4202]: I0223 13:00:14.288985 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-bin\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289071 master-0 kubenswrapper[4202]: I0223 13:00:14.289065 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25bf18b4-6d82-47ec-b51f-1221045c2975-ovn-node-metrics-cert\") pod \"ovnkube-node-h75w9\" (UID: 
\"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289071 master-0 kubenswrapper[4202]: I0223 13:00:14.289092 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-netns\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289523 master-0 kubenswrapper[4202]: I0223 13:00:14.289109 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-ovn-kubernetes\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289523 master-0 kubenswrapper[4202]: I0223 13:00:14.289142 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289523 master-0 kubenswrapper[4202]: I0223 13:00:14.289493 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-var-lib-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289740 master-0 kubenswrapper[4202]: I0223 13:00:14.289536 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-systemd\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289740 master-0 kubenswrapper[4202]: I0223 13:00:14.289588 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-systemd\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289740 master-0 kubenswrapper[4202]: I0223 13:00:14.289591 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289740 master-0 kubenswrapper[4202]: I0223 13:00:14.289671 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-var-lib-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289980 master-0 kubenswrapper[4202]: I0223 13:00:14.289766 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-ovn-kubernetes\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289980 master-0 kubenswrapper[4202]: I0223 13:00:14.289862 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-bin\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.289980 master-0 kubenswrapper[4202]: I0223 13:00:14.289945 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-etc-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290149 master-0 kubenswrapper[4202]: I0223 13:00:14.289987 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-node-log\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290149 master-0 kubenswrapper[4202]: I0223 13:00:14.290013 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-config\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290149 master-0 kubenswrapper[4202]: I0223 13:00:14.290036 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290149 master-0 kubenswrapper[4202]: I0223 13:00:14.290059 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhhw\" (UniqueName: 
\"kubernetes.io/projected/25bf18b4-6d82-47ec-b51f-1221045c2975-kube-api-access-jrhhw\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290149 master-0 kubenswrapper[4202]: I0223 13:00:14.290089 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-slash\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290149 master-0 kubenswrapper[4202]: I0223 13:00:14.290112 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-env-overrides\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290149 master-0 kubenswrapper[4202]: I0223 13:00:14.290140 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-ovn\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290149 master-0 kubenswrapper[4202]: I0223 13:00:14.290159 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-script-lib\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290193 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-kubelet\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290216 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-log-socket\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290247 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-netd\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290271 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-systemd-units\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290393 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-systemd-units\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290436 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-netns\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290472 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-etc-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290495 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-node-log\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290612 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-log-socket\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.290655 master-0 kubenswrapper[4202]: I0223 13:00:14.290664 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-kubelet\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.291190 master-0 kubenswrapper[4202]: I0223 13:00:14.290695 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-slash\") pod 
\"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.291190 master-0 kubenswrapper[4202]: I0223 13:00:14.290680 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-openvswitch\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.291190 master-0 kubenswrapper[4202]: I0223 13:00:14.290737 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-netd\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.291190 master-0 kubenswrapper[4202]: I0223 13:00:14.290775 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-ovn\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.292540 master-0 kubenswrapper[4202]: I0223 13:00:14.292499 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-config\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.296108 master-0 kubenswrapper[4202]: I0223 13:00:14.292992 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-script-lib\") pod \"ovnkube-node-h75w9\" (UID: 
\"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.296927 master-0 kubenswrapper[4202]: I0223 13:00:14.296863 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-env-overrides\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.300750 master-0 kubenswrapper[4202]: I0223 13:00:14.300684 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25bf18b4-6d82-47ec-b51f-1221045c2975-ovn-node-metrics-cert\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.313208 master-0 kubenswrapper[4202]: I0223 13:00:14.313155 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhhw\" (UniqueName: \"kubernetes.io/projected/25bf18b4-6d82-47ec-b51f-1221045c2975-kube-api-access-jrhhw\") pod \"ovnkube-node-h75w9\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") " pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.409506 master-0 kubenswrapper[4202]: I0223 13:00:14.409453 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:14.427593 master-0 kubenswrapper[4202]: W0223 13:00:14.427521 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25bf18b4_6d82_47ec_b51f_1221045c2975.slice/crio-065bc0d684a947cce0eca298f9885e12618ab1422a1e16b4309b431cef92d591 WatchSource:0}: Error finding container 065bc0d684a947cce0eca298f9885e12618ab1422a1e16b4309b431cef92d591: Status 404 returned error can't find the container with id 065bc0d684a947cce0eca298f9885e12618ab1422a1e16b4309b431cef92d591 Feb 23 13:00:14.532905 master-0 kubenswrapper[4202]: I0223 13:00:14.532815 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:14.533189 master-0 kubenswrapper[4202]: E0223 13:00:14.533098 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:14.534024 master-0 kubenswrapper[4202]: I0223 13:00:14.533890 4202 scope.go:117] "RemoveContainer" containerID="1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736" Feb 23 13:00:14.534329 master-0 kubenswrapper[4202]: E0223 13:00:14.534249 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 23 13:00:14.821228 master-0 kubenswrapper[4202]: I0223 13:00:14.821153 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"065bc0d684a947cce0eca298f9885e12618ab1422a1e16b4309b431cef92d591"} Feb 23 13:00:14.822951 master-0 kubenswrapper[4202]: I0223 13:00:14.822874 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" event={"ID":"d7c80f4d-6b28-44f4-beef-01e705260452","Type":"ContainerStarted","Data":"03b129d1bc8d649447c395807b48259d421cc23ca1ca83bcebde05d222777ccd"} Feb 23 13:00:14.823000 master-0 kubenswrapper[4202]: I0223 13:00:14.822974 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" event={"ID":"d7c80f4d-6b28-44f4-beef-01e705260452","Type":"ContainerStarted","Data":"1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c"} Feb 23 13:00:15.602502 master-0 kubenswrapper[4202]: I0223 13:00:15.602449 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:15.602822 master-0 kubenswrapper[4202]: E0223 13:00:15.602585 4202 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:00:15.602822 master-0 kubenswrapper[4202]: E0223 13:00:15.602646 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:47.602625781 +0000 UTC m=+105.891487409 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:00:15.828951 master-0 kubenswrapper[4202]: I0223 13:00:15.828895 4202 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="0e189812d4682599d3015af9393b7f83c9c7e758eb4c42ea44314281d98f5ef5" exitCode=0 Feb 23 13:00:15.828951 master-0 kubenswrapper[4202]: I0223 13:00:15.828948 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerDied","Data":"0e189812d4682599d3015af9393b7f83c9c7e758eb4c42ea44314281d98f5ef5"} Feb 23 13:00:16.532976 master-0 kubenswrapper[4202]: I0223 13:00:16.532897 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:16.533195 master-0 kubenswrapper[4202]: E0223 13:00:16.533055 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:17.080850 master-0 kubenswrapper[4202]: I0223 13:00:17.080811 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-rnz52"] Feb 23 13:00:17.081217 master-0 kubenswrapper[4202]: I0223 13:00:17.081178 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:17.081263 master-0 kubenswrapper[4202]: E0223 13:00:17.081245 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:17.218854 master-0 kubenswrapper[4202]: I0223 13:00:17.218715 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:17.320586 master-0 kubenswrapper[4202]: I0223 13:00:17.320512 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:17.346770 master-0 kubenswrapper[4202]: E0223 13:00:17.346703 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:17.346770 master-0 kubenswrapper[4202]: E0223 13:00:17.346755 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:17.346770 master-0 kubenswrapper[4202]: E0223 13:00:17.346777 4202 projected.go:194] Error preparing data for projected volume kube-api-access-tmrjc for pod openshift-network-diagnostics/network-check-target-rnz52: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:17.347026 master-0 kubenswrapper[4202]: E0223 13:00:17.346869 4202 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc podName:f81886b9-fcd3-4666-b550-0688072210f7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:17.846842434 +0000 UTC m=+76.135704152 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tmrjc" (UniqueName: "kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc") pod "network-check-target-rnz52" (UID: "f81886b9-fcd3-4666-b550-0688072210f7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:17.838758 master-0 kubenswrapper[4202]: I0223 13:00:17.838652 4202 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="4b8e974553a5805af4feb6c94d4d5c7568b29cb246442dd9b1691b86b9879742" exitCode=0 Feb 23 13:00:17.838758 master-0 kubenswrapper[4202]: I0223 13:00:17.838754 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerDied","Data":"4b8e974553a5805af4feb6c94d4d5c7568b29cb246442dd9b1691b86b9879742"} Feb 23 13:00:17.926025 master-0 kubenswrapper[4202]: I0223 13:00:17.925320 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:17.926025 master-0 kubenswrapper[4202]: E0223 13:00:17.925531 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:17.926025 master-0 
kubenswrapper[4202]: E0223 13:00:17.925553 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:17.926025 master-0 kubenswrapper[4202]: E0223 13:00:17.925569 4202 projected.go:194] Error preparing data for projected volume kube-api-access-tmrjc for pod openshift-network-diagnostics/network-check-target-rnz52: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:17.926025 master-0 kubenswrapper[4202]: E0223 13:00:17.925631 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc podName:f81886b9-fcd3-4666-b550-0688072210f7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:18.925609581 +0000 UTC m=+77.214471219 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tmrjc" (UniqueName: "kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc") pod "network-check-target-rnz52" (UID: "f81886b9-fcd3-4666-b550-0688072210f7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:18.329553 master-0 kubenswrapper[4202]: I0223 13:00:18.329493 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:18.330057 master-0 kubenswrapper[4202]: E0223 13:00:18.329701 4202 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:18.330057 master-0 kubenswrapper[4202]: E0223 13:00:18.329779 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:34.329755398 +0000 UTC m=+92.618617036 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:18.532983 master-0 kubenswrapper[4202]: I0223 13:00:18.532915 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:18.533226 master-0 kubenswrapper[4202]: I0223 13:00:18.533061 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:18.533226 master-0 kubenswrapper[4202]: E0223 13:00:18.533183 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:18.533316 master-0 kubenswrapper[4202]: E0223 13:00:18.533279 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:18.934729 master-0 kubenswrapper[4202]: I0223 13:00:18.934664 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:18.935000 master-0 kubenswrapper[4202]: E0223 13:00:18.934887 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:18.935000 master-0 kubenswrapper[4202]: E0223 13:00:18.934906 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:18.935000 master-0 kubenswrapper[4202]: E0223 13:00:18.934921 4202 projected.go:194] Error preparing data for projected volume kube-api-access-tmrjc for pod openshift-network-diagnostics/network-check-target-rnz52: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:18.935130 master-0 kubenswrapper[4202]: E0223 13:00:18.935028 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc podName:f81886b9-fcd3-4666-b550-0688072210f7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:20.935011959 +0000 UTC m=+79.223873587 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tmrjc" (UniqueName: "kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc") pod "network-check-target-rnz52" (UID: "f81886b9-fcd3-4666-b550-0688072210f7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:19.682823 master-0 kubenswrapper[4202]: I0223 13:00:19.682764 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-zr6kv"] Feb 23 13:00:19.683573 master-0 kubenswrapper[4202]: I0223 13:00:19.683264 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.685683 master-0 kubenswrapper[4202]: I0223 13:00:19.685656 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 13:00:19.685829 master-0 kubenswrapper[4202]: I0223 13:00:19.685812 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 13:00:19.686279 master-0 kubenswrapper[4202]: I0223 13:00:19.686218 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 13:00:19.686327 master-0 kubenswrapper[4202]: I0223 13:00:19.686295 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 13:00:19.686414 master-0 kubenswrapper[4202]: I0223 13:00:19.686253 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 13:00:19.845062 master-0 kubenswrapper[4202]: I0223 13:00:19.844993 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.845062 master-0 kubenswrapper[4202]: I0223 13:00:19.845076 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.845394 master-0 kubenswrapper[4202]: I0223 13:00:19.845103 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6s7\" (UniqueName: \"kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.845394 master-0 kubenswrapper[4202]: I0223 13:00:19.845155 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.946700 master-0 kubenswrapper[4202]: I0223 13:00:19.946538 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " 
pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.946700 master-0 kubenswrapper[4202]: I0223 13:00:19.946599 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.946700 master-0 kubenswrapper[4202]: I0223 13:00:19.946622 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6s7\" (UniqueName: \"kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.947192 master-0 kubenswrapper[4202]: I0223 13:00:19.946774 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.947192 master-0 kubenswrapper[4202]: E0223 13:00:19.946945 4202 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Feb 23 13:00:19.947192 master-0 kubenswrapper[4202]: E0223 13:00:19.947041 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert podName:18386753-ec74-456d-838d-98c07c169b4b nodeName:}" failed. No retries permitted until 2026-02-23 13:00:20.447015352 +0000 UTC m=+78.735877000 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert") pod "network-node-identity-zr6kv" (UID: "18386753-ec74-456d-838d-98c07c169b4b") : secret "network-node-identity-cert" not found Feb 23 13:00:19.948037 master-0 kubenswrapper[4202]: I0223 13:00:19.947667 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.948255 master-0 kubenswrapper[4202]: I0223 13:00:19.948216 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:19.971977 master-0 kubenswrapper[4202]: I0223 13:00:19.971910 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6s7\" (UniqueName: \"kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:20.451965 master-0 kubenswrapper[4202]: I0223 13:00:20.451888 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:20.455924 master-0 
kubenswrapper[4202]: I0223 13:00:20.455861 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:20.532027 master-0 kubenswrapper[4202]: I0223 13:00:20.531964 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:20.532304 master-0 kubenswrapper[4202]: I0223 13:00:20.532041 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:20.532304 master-0 kubenswrapper[4202]: E0223 13:00:20.532099 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:20.532304 master-0 kubenswrapper[4202]: E0223 13:00:20.532177 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:20.598935 master-0 kubenswrapper[4202]: I0223 13:00:20.598857 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:20.612575 master-0 kubenswrapper[4202]: W0223 13:00:20.612518 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18386753_ec74_456d_838d_98c07c169b4b.slice/crio-c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9 WatchSource:0}: Error finding container c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9: Status 404 returned error can't find the container with id c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9 Feb 23 13:00:20.855191 master-0 kubenswrapper[4202]: I0223 13:00:20.855119 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-zr6kv" event={"ID":"18386753-ec74-456d-838d-98c07c169b4b","Type":"ContainerStarted","Data":"c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9"} Feb 23 13:00:20.955923 master-0 kubenswrapper[4202]: I0223 13:00:20.955870 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:20.956170 master-0 kubenswrapper[4202]: E0223 13:00:20.956035 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:20.956170 master-0 kubenswrapper[4202]: E0223 13:00:20.956054 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:20.956170 master-0 kubenswrapper[4202]: E0223 13:00:20.956065 
4202 projected.go:194] Error preparing data for projected volume kube-api-access-tmrjc for pod openshift-network-diagnostics/network-check-target-rnz52: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:20.956170 master-0 kubenswrapper[4202]: E0223 13:00:20.956125 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc podName:f81886b9-fcd3-4666-b550-0688072210f7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:24.956108203 +0000 UTC m=+83.244969831 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tmrjc" (UniqueName: "kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc") pod "network-check-target-rnz52" (UID: "f81886b9-fcd3-4666-b550-0688072210f7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:22.532973 master-0 kubenswrapper[4202]: I0223 13:00:22.532916 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:22.532973 master-0 kubenswrapper[4202]: I0223 13:00:22.532936 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:22.534257 master-0 kubenswrapper[4202]: E0223 13:00:22.534180 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:22.534378 master-0 kubenswrapper[4202]: E0223 13:00:22.534326 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:24.532949 master-0 kubenswrapper[4202]: I0223 13:00:24.532879 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:24.534050 master-0 kubenswrapper[4202]: I0223 13:00:24.532984 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:24.534050 master-0 kubenswrapper[4202]: E0223 13:00:24.533714 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:24.534050 master-0 kubenswrapper[4202]: E0223 13:00:24.533936 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:25.001652 master-0 kubenswrapper[4202]: I0223 13:00:24.998909 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:25.001652 master-0 kubenswrapper[4202]: E0223 13:00:24.999126 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:25.001652 master-0 kubenswrapper[4202]: E0223 13:00:24.999147 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:25.001652 master-0 kubenswrapper[4202]: E0223 13:00:24.999159 4202 projected.go:194] Error preparing data for projected volume kube-api-access-tmrjc for pod openshift-network-diagnostics/network-check-target-rnz52: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:25.001652 master-0 kubenswrapper[4202]: E0223 13:00:24.999221 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc podName:f81886b9-fcd3-4666-b550-0688072210f7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:32.999203961 +0000 UTC m=+91.288065589 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tmrjc" (UniqueName: "kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc") pod "network-check-target-rnz52" (UID: "f81886b9-fcd3-4666-b550-0688072210f7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:25.179441 master-0 kubenswrapper[4202]: I0223 13:00:25.179237 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 23 13:00:26.532595 master-0 kubenswrapper[4202]: I0223 13:00:26.532533 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:26.533104 master-0 kubenswrapper[4202]: I0223 13:00:26.532652 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:26.533104 master-0 kubenswrapper[4202]: E0223 13:00:26.532791 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:26.533104 master-0 kubenswrapper[4202]: E0223 13:00:26.533041 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:27.533112 master-0 kubenswrapper[4202]: I0223 13:00:27.533043 4202 scope.go:117] "RemoveContainer" containerID="1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736" Feb 23 13:00:28.533687 master-0 kubenswrapper[4202]: I0223 13:00:28.533069 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:28.533687 master-0 kubenswrapper[4202]: I0223 13:00:28.533134 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:28.533687 master-0 kubenswrapper[4202]: E0223 13:00:28.533224 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:28.533687 master-0 kubenswrapper[4202]: E0223 13:00:28.533325 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:30.532591 master-0 kubenswrapper[4202]: I0223 13:00:30.532419 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:30.532591 master-0 kubenswrapper[4202]: E0223 13:00:30.532555 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:30.533151 master-0 kubenswrapper[4202]: I0223 13:00:30.532331 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:30.533151 master-0 kubenswrapper[4202]: E0223 13:00:30.533111 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:30.895509 master-0 kubenswrapper[4202]: I0223 13:00:30.893793 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4" exitCode=0 Feb 23 13:00:30.895509 master-0 kubenswrapper[4202]: I0223 13:00:30.894018 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} Feb 23 13:00:30.900198 master-0 kubenswrapper[4202]: I0223 13:00:30.900165 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/3.log" Feb 23 13:00:30.900720 master-0 kubenswrapper[4202]: I0223 13:00:30.900682 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"1145acd6528f641fe4dba004ec108b22fd6a9f58b87118602acd22f6be1e6680"} Feb 23 13:00:30.905650 master-0 kubenswrapper[4202]: I0223 13:00:30.905606 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerStarted","Data":"d00db22a72ea4aa1ec65791429e5f61e982e0efe4b37e51163034797dd496f23"} Feb 23 13:00:30.920640 master-0 kubenswrapper[4202]: I0223 13:00:30.911904 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=6.91189087 podStartE2EDuration="6.91189087s" podCreationTimestamp="2026-02-23 13:00:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:30.909631424 +0000 UTC m=+89.198493062" watchObservedRunningTime="2026-02-23 13:00:30.91189087 +0000 UTC m=+89.200752498" Feb 23 13:00:30.920640 master-0 kubenswrapper[4202]: I0223 13:00:30.913009 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" event={"ID":"d7c80f4d-6b28-44f4-beef-01e705260452","Type":"ContainerStarted","Data":"2187448d7b4208e3e1befa756c107826cc44935cd19819e30170d5f0d754f882"} Feb 23 13:00:30.949650 master-0 kubenswrapper[4202]: I0223 13:00:30.949563 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" podStartSLOduration=1.806682248 podStartE2EDuration="17.949535873s" podCreationTimestamp="2026-02-23 13:00:13 +0000 UTC" firstStartedPulling="2026-02-23 13:00:14.443102926 +0000 UTC m=+72.731964564" lastFinishedPulling="2026-02-23 13:00:30.585956561 +0000 UTC m=+88.874818189" observedRunningTime="2026-02-23 13:00:30.948492566 +0000 UTC m=+89.237354184" watchObservedRunningTime="2026-02-23 13:00:30.949535873 +0000 UTC m=+89.238397501" Feb 23 13:00:30.963497 master-0 kubenswrapper[4202]: I0223 13:00:30.960889 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=46.960840556 podStartE2EDuration="46.960840556s" podCreationTimestamp="2026-02-23 12:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:30.960131198 +0000 UTC m=+89.248992846" watchObservedRunningTime="2026-02-23 13:00:30.960840556 +0000 UTC m=+89.249702174" Feb 23 13:00:31.920808 master-0 kubenswrapper[4202]: I0223 13:00:31.920142 4202 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} Feb 23 13:00:31.920808 master-0 kubenswrapper[4202]: I0223 13:00:31.920804 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} Feb 23 13:00:31.921726 master-0 kubenswrapper[4202]: I0223 13:00:31.920822 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} Feb 23 13:00:31.921726 master-0 kubenswrapper[4202]: I0223 13:00:31.920835 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} Feb 23 13:00:31.921726 master-0 kubenswrapper[4202]: I0223 13:00:31.920848 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} Feb 23 13:00:31.921726 master-0 kubenswrapper[4202]: I0223 13:00:31.920861 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} Feb 23 13:00:31.923516 master-0 kubenswrapper[4202]: I0223 13:00:31.923386 4202 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-node-identity/network-node-identity-zr6kv" event={"ID":"18386753-ec74-456d-838d-98c07c169b4b","Type":"ContainerStarted","Data":"d01166f75613e8876ca557628e42fc7b26709f163770565d233c3c09b10f65ff"} Feb 23 13:00:31.923516 master-0 kubenswrapper[4202]: I0223 13:00:31.923433 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-zr6kv" event={"ID":"18386753-ec74-456d-838d-98c07c169b4b","Type":"ContainerStarted","Data":"9637c38b0806c61412770c6d54101d7a1c16b7eaae669849a06ba561b39d5ae2"} Feb 23 13:00:31.928948 master-0 kubenswrapper[4202]: I0223 13:00:31.928602 4202 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="d00db22a72ea4aa1ec65791429e5f61e982e0efe4b37e51163034797dd496f23" exitCode=0 Feb 23 13:00:31.928948 master-0 kubenswrapper[4202]: I0223 13:00:31.928629 4202 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="4239f8be57b6158b6fa0698dec86bee3b9d4f017ada846bb7d788ccf7bd49862" exitCode=0 Feb 23 13:00:31.928948 master-0 kubenswrapper[4202]: I0223 13:00:31.928748 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerDied","Data":"d00db22a72ea4aa1ec65791429e5f61e982e0efe4b37e51163034797dd496f23"} Feb 23 13:00:31.928948 master-0 kubenswrapper[4202]: I0223 13:00:31.928821 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerDied","Data":"4239f8be57b6158b6fa0698dec86bee3b9d4f017ada846bb7d788ccf7bd49862"} Feb 23 13:00:31.945152 master-0 kubenswrapper[4202]: I0223 13:00:31.945040 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-zr6kv" 
podStartSLOduration=2.238749837 podStartE2EDuration="12.945003752s" podCreationTimestamp="2026-02-23 13:00:19 +0000 UTC" firstStartedPulling="2026-02-23 13:00:20.616421249 +0000 UTC m=+78.905282877" lastFinishedPulling="2026-02-23 13:00:31.322675174 +0000 UTC m=+89.611536792" observedRunningTime="2026-02-23 13:00:31.942306454 +0000 UTC m=+90.231168112" watchObservedRunningTime="2026-02-23 13:00:31.945003752 +0000 UTC m=+90.233865450" Feb 23 13:00:32.532119 master-0 kubenswrapper[4202]: I0223 13:00:32.531997 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:32.532756 master-0 kubenswrapper[4202]: I0223 13:00:32.532154 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:32.534758 master-0 kubenswrapper[4202]: E0223 13:00:32.534675 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:32.534882 master-0 kubenswrapper[4202]: E0223 13:00:32.534774 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:32.548084 master-0 kubenswrapper[4202]: W0223 13:00:32.547997 4202 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 23 13:00:32.549867 master-0 kubenswrapper[4202]: I0223 13:00:32.549789 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 23 13:00:32.940287 master-0 kubenswrapper[4202]: I0223 13:00:32.940014 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-srlm4" event={"ID":"99f14e64-228f-4b9e-991f-ee398fe7bb8a","Type":"ContainerStarted","Data":"40031c8fd43194d6e97016387af88d9582da44a2f68e4044603c5679f8688988"} Feb 23 13:00:32.990114 master-0 kubenswrapper[4202]: I0223 13:00:32.989946 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-srlm4" podStartSLOduration=3.516534709 podStartE2EDuration="31.989910268s" podCreationTimestamp="2026-02-23 13:00:01 +0000 UTC" firstStartedPulling="2026-02-23 13:00:02.035359549 +0000 UTC m=+60.324221177" lastFinishedPulling="2026-02-23 13:00:30.508735108 +0000 UTC m=+88.797596736" 
observedRunningTime="2026-02-23 13:00:32.989256592 +0000 UTC m=+91.278118270" watchObservedRunningTime="2026-02-23 13:00:32.989910268 +0000 UTC m=+91.278771936" Feb 23 13:00:32.990426 master-0 kubenswrapper[4202]: I0223 13:00:32.990251 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=0.990241537 podStartE2EDuration="990.241537ms" podCreationTimestamp="2026-02-23 13:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:32.958851321 +0000 UTC m=+91.247712999" watchObservedRunningTime="2026-02-23 13:00:32.990241537 +0000 UTC m=+91.279103195" Feb 23 13:00:33.073648 master-0 kubenswrapper[4202]: I0223 13:00:33.073507 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:33.073900 master-0 kubenswrapper[4202]: E0223 13:00:33.073815 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:33.074012 master-0 kubenswrapper[4202]: E0223 13:00:33.073898 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:33.074012 master-0 kubenswrapper[4202]: E0223 13:00:33.073931 4202 projected.go:194] Error preparing data for projected volume kube-api-access-tmrjc for pod openshift-network-diagnostics/network-check-target-rnz52: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:33.074120 master-0 kubenswrapper[4202]: E0223 13:00:33.074061 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc podName:f81886b9-fcd3-4666-b550-0688072210f7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:49.074019333 +0000 UTC m=+107.362881031 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tmrjc" (UniqueName: "kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc") pod "network-check-target-rnz52" (UID: "f81886b9-fcd3-4666-b550-0688072210f7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:33.948421 master-0 kubenswrapper[4202]: I0223 13:00:33.948307 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} Feb 23 13:00:34.385810 master-0 kubenswrapper[4202]: I0223 13:00:34.385698 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:34.386054 master-0 kubenswrapper[4202]: E0223 13:00:34.386001 4202 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:34.386227 master-0 kubenswrapper[4202]: E0223 13:00:34.386179 4202 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.38613174 +0000 UTC m=+124.674993408 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:34.532601 master-0 kubenswrapper[4202]: I0223 13:00:34.532481 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:34.532984 master-0 kubenswrapper[4202]: I0223 13:00:34.532604 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:34.532984 master-0 kubenswrapper[4202]: E0223 13:00:34.532704 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:34.532984 master-0 kubenswrapper[4202]: E0223 13:00:34.532835 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:36.532727 master-0 kubenswrapper[4202]: I0223 13:00:36.532190 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:36.534115 master-0 kubenswrapper[4202]: I0223 13:00:36.532237 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:36.534115 master-0 kubenswrapper[4202]: E0223 13:00:36.532840 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:36.534115 master-0 kubenswrapper[4202]: E0223 13:00:36.533883 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:36.974784 master-0 kubenswrapper[4202]: I0223 13:00:36.974595 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerStarted","Data":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} Feb 23 13:00:36.975475 master-0 kubenswrapper[4202]: I0223 13:00:36.975405 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:36.975539 master-0 kubenswrapper[4202]: I0223 13:00:36.975500 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:36.975539 master-0 kubenswrapper[4202]: I0223 13:00:36.975522 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:37.070734 master-0 kubenswrapper[4202]: I0223 13:00:37.070151 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podStartSLOduration=6.928681406 podStartE2EDuration="23.070127736s" podCreationTimestamp="2026-02-23 13:00:14 +0000 UTC" firstStartedPulling="2026-02-23 13:00:14.431427494 +0000 UTC m=+72.720289132" lastFinishedPulling="2026-02-23 13:00:30.572873834 +0000 UTC m=+88.861735462" observedRunningTime="2026-02-23 13:00:37.070099835 +0000 UTC m=+95.358961493" watchObservedRunningTime="2026-02-23 13:00:37.070127736 +0000 UTC m=+95.358989364" Feb 23 13:00:37.072171 master-0 kubenswrapper[4202]: I0223 13:00:37.072063 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:37.072282 master-0 kubenswrapper[4202]: I0223 13:00:37.072252 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:37.547953 master-0 kubenswrapper[4202]: I0223 13:00:37.547837 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 23 13:00:38.532637 master-0 kubenswrapper[4202]: I0223 13:00:38.532535 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:38.532637 master-0 kubenswrapper[4202]: I0223 13:00:38.532588 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:38.533001 master-0 kubenswrapper[4202]: E0223 13:00:38.532778 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:38.533001 master-0 kubenswrapper[4202]: E0223 13:00:38.532952 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:38.555720 master-0 kubenswrapper[4202]: I0223 13:00:38.555658 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bbrcr"] Feb 23 13:00:38.559880 master-0 kubenswrapper[4202]: I0223 13:00:38.559811 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-rnz52"] Feb 23 13:00:38.981747 master-0 kubenswrapper[4202]: I0223 13:00:38.981588 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:38.982036 master-0 kubenswrapper[4202]: I0223 13:00:38.981714 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:38.982279 master-0 kubenswrapper[4202]: E0223 13:00:38.982237 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:38.982515 master-0 kubenswrapper[4202]: E0223 13:00:38.982455 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:39.723636 master-0 kubenswrapper[4202]: I0223 13:00:39.723577 4202 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h75w9"] Feb 23 13:00:39.985110 master-0 kubenswrapper[4202]: I0223 13:00:39.984976 4202 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovn-controller" containerID="cri-o://c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219" gracePeriod=30 Feb 23 13:00:39.985110 master-0 kubenswrapper[4202]: I0223 13:00:39.985052 4202 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="northd" containerID="cri-o://4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" gracePeriod=30 Feb 23 13:00:39.985351 master-0 kubenswrapper[4202]: I0223 13:00:39.985130 4202 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" gracePeriod=30 Feb 23 13:00:39.985351 master-0 kubenswrapper[4202]: I0223 13:00:39.985215 4202 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovn-acl-logging" containerID="cri-o://1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d" gracePeriod=30 Feb 23 13:00:39.985351 master-0 kubenswrapper[4202]: I0223 13:00:39.985196 4202 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="sbdb" containerID="cri-o://a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" gracePeriod=30
Feb 23 13:00:39.985351 master-0 kubenswrapper[4202]: I0223 13:00:39.985263 4202 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="nbdb" containerID="cri-o://c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" gracePeriod=30
Feb 23 13:00:39.985556 master-0 kubenswrapper[4202]: I0223 13:00:39.985504 4202 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kube-rbac-proxy-node" containerID="cri-o://02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" gracePeriod=30
Feb 23 13:00:40.011740 master-0 kubenswrapper[4202]: I0223 13:00:40.010487 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=3.010452019 podStartE2EDuration="3.010452019s" podCreationTimestamp="2026-02-23 13:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:40.009432535 +0000 UTC m=+98.298294223" watchObservedRunningTime="2026-02-23 13:00:40.010452019 +0000 UTC m=+98.299313687"
Feb 23 13:00:40.013967 master-0 kubenswrapper[4202]: I0223 13:00:40.013866 4202 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovnkube-controller" containerID="cri-o://683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" gracePeriod=30
Feb 23 13:00:40.270168 master-0 kubenswrapper[4202]: I0223 13:00:40.270111 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/ovnkube-controller/0.log"
Feb 23 13:00:40.272680 master-0 kubenswrapper[4202]: I0223 13:00:40.272640 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/kube-rbac-proxy-ovn-metrics/0.log"
Feb 23 13:00:40.273528 master-0 kubenswrapper[4202]: I0223 13:00:40.273492 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/kube-rbac-proxy-node/0.log"
Feb 23 13:00:40.274053 master-0 kubenswrapper[4202]: I0223 13:00:40.274016 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/ovn-acl-logging/0.log"
Feb 23 13:00:40.274639 master-0 kubenswrapper[4202]: I0223 13:00:40.274601 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/ovn-controller/0.log"
Feb 23 13:00:40.275440 master-0 kubenswrapper[4202]: I0223 13:00:40.275404 4202 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9"
Feb 23 13:00:40.335529 master-0 kubenswrapper[4202]: I0223 13:00:40.335446 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qz8dt"]
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: E0223 13:00:40.335614 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovn-controller"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: I0223 13:00:40.335635 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovn-controller"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: E0223 13:00:40.335649 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kube-rbac-proxy-ovn-metrics"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: I0223 13:00:40.335663 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kube-rbac-proxy-ovn-metrics"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: E0223 13:00:40.335675 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovnkube-controller"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: I0223 13:00:40.335691 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovnkube-controller"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: E0223 13:00:40.335706 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kube-rbac-proxy-node"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: I0223 13:00:40.335718 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kube-rbac-proxy-node"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: E0223 13:00:40.335734 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovn-acl-logging"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: I0223 13:00:40.335746 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovn-acl-logging"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: E0223 13:00:40.335759 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="nbdb"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: I0223 13:00:40.335773 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="nbdb"
Feb 23 13:00:40.335786 master-0 kubenswrapper[4202]: E0223 13:00:40.335788 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="northd"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335800 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="northd"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: E0223 13:00:40.335815 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kubecfg-setup"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335827 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kubecfg-setup"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: E0223 13:00:40.335841 4202 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="sbdb"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335852 4202 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="sbdb"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335915 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kube-rbac-proxy-ovn-metrics"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335930 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="nbdb"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335943 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="sbdb"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335955 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="northd"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335970 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="kube-rbac-proxy-node"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335983 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovn-controller"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.335995 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovnkube-controller"
Feb 23 13:00:40.336275 master-0 kubenswrapper[4202]: I0223 13:00:40.336030 4202 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerName="ovn-acl-logging"
Feb 23 13:00:40.337169 master-0 kubenswrapper[4202]: I0223 13:00:40.337134 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.341071 master-0 kubenswrapper[4202]: I0223 13:00:40.341006 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-netd\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341173 master-0 kubenswrapper[4202]: I0223 13:00:40.341079 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-slash\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341173 master-0 kubenswrapper[4202]: I0223 13:00:40.341121 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-kubelet\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341173 master-0 kubenswrapper[4202]: I0223 13:00:40.341161 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-var-lib-openvswitch\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341365 master-0 kubenswrapper[4202]: I0223 13:00:40.341183 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341365 master-0 kubenswrapper[4202]: I0223 13:00:40.341202 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-node-log\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341365 master-0 kubenswrapper[4202]: I0223 13:00:40.341237 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341365 master-0 kubenswrapper[4202]: I0223 13:00:40.341274 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-node-log" (OuterVolumeSpecName: "node-log") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341365 master-0 kubenswrapper[4202]: I0223 13:00:40.341241 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341365 master-0 kubenswrapper[4202]: I0223 13:00:40.341246 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-netns\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341677 master-0 kubenswrapper[4202]: I0223 13:00:40.341433 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25bf18b4-6d82-47ec-b51f-1221045c2975-ovn-node-metrics-cert\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341677 master-0 kubenswrapper[4202]: I0223 13:00:40.341463 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-openvswitch\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341677 master-0 kubenswrapper[4202]: I0223 13:00:40.341522 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-var-lib-cni-networks-ovn-kubernetes\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341677 master-0 kubenswrapper[4202]: I0223 13:00:40.341559 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-script-lib\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341677 master-0 kubenswrapper[4202]: I0223 13:00:40.341605 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-systemd-units\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341677 master-0 kubenswrapper[4202]: I0223 13:00:40.341633 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-env-overrides\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341695 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-ovn-kubernetes\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341267 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-slash" (OuterVolumeSpecName: "host-slash") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341723 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-ovn\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341794 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-systemd\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341509 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341781 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341828 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341878 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-log-socket\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341651 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341916 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-log-socket" (OuterVolumeSpecName: "log-socket") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341730 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341740 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.341974 master-0 kubenswrapper[4202]: I0223 13:00:40.341983 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-etc-openvswitch\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342078 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342116 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-config\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342240 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342274 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342308 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-bin\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342366 4202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhhw\" (UniqueName: \"kubernetes.io/projected/25bf18b4-6d82-47ec-b51f-1221045c2975-kube-api-access-jrhhw\") pod \"25bf18b4-6d82-47ec-b51f-1221045c2975\" (UID: \"25bf18b4-6d82-47ec-b51f-1221045c2975\") "
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342450 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342659 4202 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-log-socket\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.342689 master-0 kubenswrapper[4202]: I0223 13:00:40.342704 4202 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-bin\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342721 4202 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-etc-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342734 4202 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-cni-netd\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342749 4202 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342786 4202 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-slash\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342798 4202 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-kubelet\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342812 4202 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-netns\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342826 4202 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-node-log\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342823 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342837 4202 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342873 4202 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342885 4202 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342898 4202 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-systemd-units\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342910 4202 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-env-overrides\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342955 4202 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.343210 master-0 kubenswrapper[4202]: I0223 13:00:40.342966 4202 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-ovn\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.347830 master-0 kubenswrapper[4202]: I0223 13:00:40.347742 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bf18b4-6d82-47ec-b51f-1221045c2975-kube-api-access-jrhhw" (OuterVolumeSpecName: "kube-api-access-jrhhw") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "kube-api-access-jrhhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:00:40.348456 master-0 kubenswrapper[4202]: I0223 13:00:40.348107 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bf18b4-6d82-47ec-b51f-1221045c2975-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:00:40.355294 master-0 kubenswrapper[4202]: I0223 13:00:40.355236 4202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "25bf18b4-6d82-47ec-b51f-1221045c2975" (UID: "25bf18b4-6d82-47ec-b51f-1221045c2975"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:40.444239 master-0 kubenswrapper[4202]: I0223 13:00:40.444025 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444239 master-0 kubenswrapper[4202]: I0223 13:00:40.444231 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444673 master-0 kubenswrapper[4202]: I0223 13:00:40.444322 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444673 master-0 kubenswrapper[4202]: I0223 13:00:40.444416 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444673 master-0 kubenswrapper[4202]: I0223 13:00:40.444464 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444673 master-0 kubenswrapper[4202]: I0223 13:00:40.444501 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444673 master-0 kubenswrapper[4202]: I0223 13:00:40.444540 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444673 master-0 kubenswrapper[4202]: I0223 13:00:40.444575 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444673 master-0 kubenswrapper[4202]: I0223 13:00:40.444616 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444673 master-0 kubenswrapper[4202]: I0223 13:00:40.444654 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4mkf\" (UniqueName: \"kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444959 master-0 kubenswrapper[4202]: I0223 13:00:40.444686 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444959 master-0 kubenswrapper[4202]: I0223 13:00:40.444722 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444959 master-0 kubenswrapper[4202]: I0223 13:00:40.444754 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444959 master-0 kubenswrapper[4202]: I0223 13:00:40.444788 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.444959 master-0 kubenswrapper[4202]: I0223 13:00:40.444843 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.445301 master-0 kubenswrapper[4202]: I0223 13:00:40.444985 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.445301 master-0 kubenswrapper[4202]: I0223 13:00:40.445126 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.445301 master-0 kubenswrapper[4202]: I0223 13:00:40.445197 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.445301 master-0 kubenswrapper[4202]: I0223 13:00:40.445238 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.445301 master-0 kubenswrapper[4202]: I0223 13:00:40.445278 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:40.445530 master-0 kubenswrapper[4202]: I0223 13:00:40.445392 4202 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25bf18b4-6d82-47ec-b51f-1221045c2975-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.445530 master-0 kubenswrapper[4202]: I0223 13:00:40.445431 4202 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25bf18b4-6d82-47ec-b51f-1221045c2975-run-systemd\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.445530 master-0 kubenswrapper[4202]: I0223 13:00:40.445458 4202 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25bf18b4-6d82-47ec-b51f-1221045c2975-ovnkube-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:40.445530 master-0 kubenswrapper[4202]: I0223 13:00:40.445478 4202 reconciler_common.go:293]
"Volume detached for volume \"kube-api-access-jrhhw\" (UniqueName: \"kubernetes.io/projected/25bf18b4-6d82-47ec-b51f-1221045c2975-kube-api-access-jrhhw\") on node \"master-0\" DevicePath \"\"" Feb 23 13:00:40.532728 master-0 kubenswrapper[4202]: I0223 13:00:40.532647 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:40.532927 master-0 kubenswrapper[4202]: I0223 13:00:40.532823 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:40.532986 master-0 kubenswrapper[4202]: E0223 13:00:40.532905 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:40.533179 master-0 kubenswrapper[4202]: E0223 13:00:40.533109 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:40.546694 master-0 kubenswrapper[4202]: I0223 13:00:40.546551 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.546694 master-0 kubenswrapper[4202]: I0223 13:00:40.546638 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.546963 master-0 kubenswrapper[4202]: I0223 13:00:40.546732 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.546963 master-0 kubenswrapper[4202]: I0223 13:00:40.546845 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.546963 master-0 kubenswrapper[4202]: I0223 13:00:40.546906 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: 
\"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.546963 master-0 kubenswrapper[4202]: I0223 13:00:40.546946 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547255 master-0 kubenswrapper[4202]: I0223 13:00:40.547182 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547359 master-0 kubenswrapper[4202]: I0223 13:00:40.547318 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547422 master-0 kubenswrapper[4202]: I0223 13:00:40.547370 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547457 master-0 kubenswrapper[4202]: I0223 13:00:40.547425 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547518 master-0 kubenswrapper[4202]: I0223 13:00:40.547488 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547561 master-0 kubenswrapper[4202]: I0223 13:00:40.547538 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547720 master-0 kubenswrapper[4202]: I0223 13:00:40.547676 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547773 master-0 kubenswrapper[4202]: I0223 13:00:40.547720 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547773 master-0 kubenswrapper[4202]: I0223 13:00:40.547759 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: 
\"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547834 master-0 kubenswrapper[4202]: I0223 13:00:40.547784 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4mkf\" (UniqueName: \"kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.547988 master-0 kubenswrapper[4202]: I0223 13:00:40.547910 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548295 master-0 kubenswrapper[4202]: I0223 13:00:40.548243 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548366 master-0 kubenswrapper[4202]: I0223 13:00:40.548315 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548658 master-0 kubenswrapper[4202]: I0223 13:00:40.548386 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548658 master-0 kubenswrapper[4202]: I0223 13:00:40.548458 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548658 master-0 kubenswrapper[4202]: I0223 13:00:40.548489 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548658 master-0 kubenswrapper[4202]: I0223 13:00:40.548567 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548658 master-0 kubenswrapper[4202]: I0223 13:00:40.548553 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548863 master-0 kubenswrapper[4202]: I0223 13:00:40.548661 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.548863 master-0 kubenswrapper[4202]: I0223 13:00:40.548762 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549624 master-0 kubenswrapper[4202]: I0223 13:00:40.549578 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549703 master-0 kubenswrapper[4202]: I0223 13:00:40.549674 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549784 master-0 kubenswrapper[4202]: I0223 13:00:40.549746 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549833 master-0 kubenswrapper[4202]: I0223 13:00:40.549786 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549833 master-0 kubenswrapper[4202]: I0223 13:00:40.549800 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549833 master-0 kubenswrapper[4202]: I0223 13:00:40.549829 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549921 master-0 kubenswrapper[4202]: I0223 13:00:40.549838 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549921 master-0 kubenswrapper[4202]: I0223 13:00:40.549865 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.549921 master-0 kubenswrapper[4202]: I0223 13:00:40.549750 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.550015 master-0 kubenswrapper[4202]: I0223 13:00:40.549947 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.550015 master-0 kubenswrapper[4202]: I0223 13:00:40.549994 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.550073 master-0 kubenswrapper[4202]: I0223 13:00:40.549987 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.552090 master-0 kubenswrapper[4202]: I0223 13:00:40.552045 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.574112 master-0 kubenswrapper[4202]: I0223 13:00:40.574036 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4mkf\" (UniqueName: 
\"kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.670236 master-0 kubenswrapper[4202]: I0223 13:00:40.670164 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:40.695429 master-0 kubenswrapper[4202]: W0223 13:00:40.695329 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod540b41b0_f574_46b9_8b2f_19e90ad5d0ce.slice/crio-e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90 WatchSource:0}: Error finding container e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90: Status 404 returned error can't find the container with id e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90 Feb 23 13:00:40.995714 master-0 kubenswrapper[4202]: I0223 13:00:40.995642 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/ovnkube-controller/0.log" Feb 23 13:00:40.998917 master-0 kubenswrapper[4202]: I0223 13:00:40.998850 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/kube-rbac-proxy-ovn-metrics/0.log" Feb 23 13:00:40.999966 master-0 kubenswrapper[4202]: I0223 13:00:40.999901 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/kube-rbac-proxy-node/0.log" Feb 23 13:00:41.000824 master-0 kubenswrapper[4202]: I0223 13:00:41.000780 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/ovn-acl-logging/0.log" Feb 23 13:00:41.001755 master-0 
kubenswrapper[4202]: I0223 13:00:41.001716 4202 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h75w9_25bf18b4-6d82-47ec-b51f-1221045c2975/ovn-controller/0.log" Feb 23 13:00:41.002521 master-0 kubenswrapper[4202]: I0223 13:00:41.002459 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" exitCode=2 Feb 23 13:00:41.002521 master-0 kubenswrapper[4202]: I0223 13:00:41.002511 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" exitCode=0 Feb 23 13:00:41.002717 master-0 kubenswrapper[4202]: I0223 13:00:41.002508 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} Feb 23 13:00:41.002717 master-0 kubenswrapper[4202]: I0223 13:00:41.002594 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} Feb 23 13:00:41.002717 master-0 kubenswrapper[4202]: I0223 13:00:41.002621 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} Feb 23 13:00:41.002717 master-0 kubenswrapper[4202]: I0223 13:00:41.002657 4202 scope.go:117] "RemoveContainer" containerID="683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" Feb 23 13:00:41.002717 master-0 kubenswrapper[4202]: I0223 13:00:41.002677 
4202 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" Feb 23 13:00:41.003059 master-0 kubenswrapper[4202]: I0223 13:00:41.002529 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" exitCode=0 Feb 23 13:00:41.003059 master-0 kubenswrapper[4202]: I0223 13:00:41.002824 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" exitCode=0 Feb 23 13:00:41.003059 master-0 kubenswrapper[4202]: I0223 13:00:41.002855 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" exitCode=143 Feb 23 13:00:41.003059 master-0 kubenswrapper[4202]: I0223 13:00:41.002870 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" exitCode=143 Feb 23 13:00:41.003059 master-0 kubenswrapper[4202]: I0223 13:00:41.002884 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d" exitCode=143 Feb 23 13:00:41.003059 master-0 kubenswrapper[4202]: I0223 13:00:41.002898 4202 generic.go:334] "Generic (PLEG): container finished" podID="25bf18b4-6d82-47ec-b51f-1221045c2975" containerID="c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219" exitCode=143 Feb 23 13:00:41.003059 master-0 kubenswrapper[4202]: I0223 13:00:41.002988 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" 
event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} Feb 23 13:00:41.003498 master-0 kubenswrapper[4202]: I0223 13:00:41.003095 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} Feb 23 13:00:41.003498 master-0 kubenswrapper[4202]: I0223 13:00:41.003133 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} Feb 23 13:00:41.003498 master-0 kubenswrapper[4202]: I0223 13:00:41.003170 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} Feb 23 13:00:41.003661 master-0 kubenswrapper[4202]: I0223 13:00:41.003506 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} Feb 23 13:00:41.003661 master-0 kubenswrapper[4202]: I0223 13:00:41.003530 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} Feb 23 13:00:41.003661 master-0 kubenswrapper[4202]: I0223 13:00:41.003559 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} Feb 23 13:00:41.003661 master-0 kubenswrapper[4202]: I0223 13:00:41.003589 4202 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} Feb 23 13:00:41.003661 master-0 kubenswrapper[4202]: I0223 13:00:41.003609 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} Feb 23 13:00:41.003661 master-0 kubenswrapper[4202]: I0223 13:00:41.003626 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} Feb 23 13:00:41.003661 master-0 kubenswrapper[4202]: I0223 13:00:41.003644 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} Feb 23 13:00:41.003661 master-0 kubenswrapper[4202]: I0223 13:00:41.003661 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003683 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003698 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003713 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003725 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003743 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003763 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003778 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003790 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003803 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003816 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} Feb 
23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003828 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003840 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003853 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003865 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003881 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h75w9" event={"ID":"25bf18b4-6d82-47ec-b51f-1221045c2975","Type":"ContainerDied","Data":"065bc0d684a947cce0eca298f9885e12618ab1422a1e16b4309b431cef92d591"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003898 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003914 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003926 4202 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003937 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003949 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003961 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003974 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003985 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} Feb 23 13:00:41.004101 master-0 kubenswrapper[4202]: I0223 13:00:41.003999 4202 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} Feb 23 13:00:41.007062 master-0 kubenswrapper[4202]: I0223 13:00:41.006891 4202 generic.go:334] "Generic (PLEG): container finished" podID="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" 
containerID="3dce0cc5f97bf43d2b56ee91d574aa374ea8564835a1d8988f603b6c0033063a" exitCode=0 Feb 23 13:00:41.007136 master-0 kubenswrapper[4202]: I0223 13:00:41.007076 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerDied","Data":"3dce0cc5f97bf43d2b56ee91d574aa374ea8564835a1d8988f603b6c0033063a"} Feb 23 13:00:41.007211 master-0 kubenswrapper[4202]: I0223 13:00:41.007154 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90"} Feb 23 13:00:41.032035 master-0 kubenswrapper[4202]: I0223 13:00:41.031979 4202 scope.go:117] "RemoveContainer" containerID="a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" Feb 23 13:00:41.033514 master-0 kubenswrapper[4202]: I0223 13:00:41.033444 4202 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h75w9"] Feb 23 13:00:41.044041 master-0 kubenswrapper[4202]: I0223 13:00:41.043956 4202 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h75w9"] Feb 23 13:00:41.051677 master-0 kubenswrapper[4202]: I0223 13:00:41.051612 4202 scope.go:117] "RemoveContainer" containerID="c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" Feb 23 13:00:41.081849 master-0 kubenswrapper[4202]: I0223 13:00:41.081781 4202 scope.go:117] "RemoveContainer" containerID="4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" Feb 23 13:00:41.134560 master-0 kubenswrapper[4202]: I0223 13:00:41.134470 4202 scope.go:117] "RemoveContainer" containerID="062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" Feb 23 13:00:41.151330 master-0 kubenswrapper[4202]: I0223 13:00:41.151283 4202 scope.go:117] "RemoveContainer" 
containerID="02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" Feb 23 13:00:41.169241 master-0 kubenswrapper[4202]: I0223 13:00:41.169179 4202 scope.go:117] "RemoveContainer" containerID="1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d" Feb 23 13:00:41.191930 master-0 kubenswrapper[4202]: I0223 13:00:41.191865 4202 scope.go:117] "RemoveContainer" containerID="c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219" Feb 23 13:00:41.220033 master-0 kubenswrapper[4202]: I0223 13:00:41.219412 4202 scope.go:117] "RemoveContainer" containerID="5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4" Feb 23 13:00:41.239956 master-0 kubenswrapper[4202]: I0223 13:00:41.239818 4202 scope.go:117] "RemoveContainer" containerID="683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" Feb 23 13:00:41.240588 master-0 kubenswrapper[4202]: E0223 13:00:41.240509 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": container with ID starting with 683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e not found: ID does not exist" containerID="683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" Feb 23 13:00:41.240729 master-0 kubenswrapper[4202]: I0223 13:00:41.240591 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} err="failed to get container status \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": rpc error: code = NotFound desc = could not find container \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": container with ID starting with 683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e not found: ID does not exist" Feb 23 13:00:41.240729 master-0 kubenswrapper[4202]: I0223 13:00:41.240659 
4202 scope.go:117] "RemoveContainer" containerID="a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" Feb 23 13:00:41.241485 master-0 kubenswrapper[4202]: E0223 13:00:41.241422 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": container with ID starting with a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970 not found: ID does not exist" containerID="a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" Feb 23 13:00:41.241611 master-0 kubenswrapper[4202]: I0223 13:00:41.241497 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} err="failed to get container status \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": rpc error: code = NotFound desc = could not find container \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": container with ID starting with a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970 not found: ID does not exist" Feb 23 13:00:41.241611 master-0 kubenswrapper[4202]: I0223 13:00:41.241550 4202 scope.go:117] "RemoveContainer" containerID="c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" Feb 23 13:00:41.242132 master-0 kubenswrapper[4202]: E0223 13:00:41.242038 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": container with ID starting with c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87 not found: ID does not exist" containerID="c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" Feb 23 13:00:41.242232 master-0 kubenswrapper[4202]: I0223 13:00:41.242114 4202 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} err="failed to get container status \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": rpc error: code = NotFound desc = could not find container \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": container with ID starting with c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87 not found: ID does not exist" Feb 23 13:00:41.242232 master-0 kubenswrapper[4202]: I0223 13:00:41.242158 4202 scope.go:117] "RemoveContainer" containerID="4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" Feb 23 13:00:41.243064 master-0 kubenswrapper[4202]: E0223 13:00:41.243022 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": container with ID starting with 4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4 not found: ID does not exist" containerID="4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" Feb 23 13:00:41.243194 master-0 kubenswrapper[4202]: I0223 13:00:41.243071 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} err="failed to get container status \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": rpc error: code = NotFound desc = could not find container \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": container with ID starting with 4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4 not found: ID does not exist" Feb 23 13:00:41.243194 master-0 kubenswrapper[4202]: I0223 13:00:41.243106 4202 scope.go:117] "RemoveContainer" containerID="062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" Feb 23 13:00:41.243724 master-0 kubenswrapper[4202]: 
E0223 13:00:41.243635 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": container with ID starting with 062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09 not found: ID does not exist" containerID="062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" Feb 23 13:00:41.243724 master-0 kubenswrapper[4202]: I0223 13:00:41.243680 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} err="failed to get container status \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": rpc error: code = NotFound desc = could not find container \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": container with ID starting with 062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09 not found: ID does not exist" Feb 23 13:00:41.243724 master-0 kubenswrapper[4202]: I0223 13:00:41.243708 4202 scope.go:117] "RemoveContainer" containerID="02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" Feb 23 13:00:41.244526 master-0 kubenswrapper[4202]: E0223 13:00:41.244431 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": container with ID starting with 02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee not found: ID does not exist" containerID="02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" Feb 23 13:00:41.244526 master-0 kubenswrapper[4202]: I0223 13:00:41.244496 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} err="failed to get container status 
\"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": rpc error: code = NotFound desc = could not find container \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": container with ID starting with 02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee not found: ID does not exist" Feb 23 13:00:41.244526 master-0 kubenswrapper[4202]: I0223 13:00:41.244528 4202 scope.go:117] "RemoveContainer" containerID="1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d" Feb 23 13:00:41.245086 master-0 kubenswrapper[4202]: E0223 13:00:41.245015 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": container with ID starting with 1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d not found: ID does not exist" containerID="1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d" Feb 23 13:00:41.245175 master-0 kubenswrapper[4202]: I0223 13:00:41.245081 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} err="failed to get container status \"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": rpc error: code = NotFound desc = could not find container \"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": container with ID starting with 1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d not found: ID does not exist" Feb 23 13:00:41.245175 master-0 kubenswrapper[4202]: I0223 13:00:41.245124 4202 scope.go:117] "RemoveContainer" containerID="c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219" Feb 23 13:00:41.245883 master-0 kubenswrapper[4202]: E0223 13:00:41.245825 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": container with ID starting with c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219 not found: ID does not exist" containerID="c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219" Feb 23 13:00:41.246047 master-0 kubenswrapper[4202]: I0223 13:00:41.245890 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} err="failed to get container status \"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": rpc error: code = NotFound desc = could not find container \"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": container with ID starting with c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219 not found: ID does not exist" Feb 23 13:00:41.246047 master-0 kubenswrapper[4202]: I0223 13:00:41.245931 4202 scope.go:117] "RemoveContainer" containerID="5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4" Feb 23 13:00:41.246607 master-0 kubenswrapper[4202]: E0223 13:00:41.246488 4202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": container with ID starting with 5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4 not found: ID does not exist" containerID="5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4" Feb 23 13:00:41.246848 master-0 kubenswrapper[4202]: I0223 13:00:41.246598 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} err="failed to get container status \"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": rpc error: code = NotFound desc = could not find container 
\"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": container with ID starting with 5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4 not found: ID does not exist" Feb 23 13:00:41.246848 master-0 kubenswrapper[4202]: I0223 13:00:41.246632 4202 scope.go:117] "RemoveContainer" containerID="683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" Feb 23 13:00:41.247209 master-0 kubenswrapper[4202]: I0223 13:00:41.247151 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} err="failed to get container status \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": rpc error: code = NotFound desc = could not find container \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": container with ID starting with 683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e not found: ID does not exist" Feb 23 13:00:41.247209 master-0 kubenswrapper[4202]: I0223 13:00:41.247190 4202 scope.go:117] "RemoveContainer" containerID="a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" Feb 23 13:00:41.247967 master-0 kubenswrapper[4202]: I0223 13:00:41.247877 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} err="failed to get container status \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": rpc error: code = NotFound desc = could not find container \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": container with ID starting with a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970 not found: ID does not exist" Feb 23 13:00:41.247967 master-0 kubenswrapper[4202]: I0223 13:00:41.247921 4202 scope.go:117] "RemoveContainer" containerID="c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" Feb 23 
13:00:41.248592 master-0 kubenswrapper[4202]: I0223 13:00:41.248527 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} err="failed to get container status \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": rpc error: code = NotFound desc = could not find container \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": container with ID starting with c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87 not found: ID does not exist" Feb 23 13:00:41.248720 master-0 kubenswrapper[4202]: I0223 13:00:41.248601 4202 scope.go:117] "RemoveContainer" containerID="4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" Feb 23 13:00:41.249256 master-0 kubenswrapper[4202]: I0223 13:00:41.249181 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} err="failed to get container status \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": rpc error: code = NotFound desc = could not find container \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": container with ID starting with 4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4 not found: ID does not exist" Feb 23 13:00:41.249256 master-0 kubenswrapper[4202]: I0223 13:00:41.249232 4202 scope.go:117] "RemoveContainer" containerID="062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" Feb 23 13:00:41.249971 master-0 kubenswrapper[4202]: I0223 13:00:41.249893 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} err="failed to get container status \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": rpc error: code = NotFound desc = could not find container 
\"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": container with ID starting with 062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09 not found: ID does not exist" Feb 23 13:00:41.250070 master-0 kubenswrapper[4202]: I0223 13:00:41.249976 4202 scope.go:117] "RemoveContainer" containerID="02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" Feb 23 13:00:41.251094 master-0 kubenswrapper[4202]: I0223 13:00:41.251032 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} err="failed to get container status \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": rpc error: code = NotFound desc = could not find container \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": container with ID starting with 02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee not found: ID does not exist" Feb 23 13:00:41.251311 master-0 kubenswrapper[4202]: I0223 13:00:41.251096 4202 scope.go:117] "RemoveContainer" containerID="1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d" Feb 23 13:00:41.251738 master-0 kubenswrapper[4202]: I0223 13:00:41.251695 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} err="failed to get container status \"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": rpc error: code = NotFound desc = could not find container \"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": container with ID starting with 1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d not found: ID does not exist" Feb 23 13:00:41.251738 master-0 kubenswrapper[4202]: I0223 13:00:41.251736 4202 scope.go:117] "RemoveContainer" containerID="c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219" Feb 23 
13:00:41.252280 master-0 kubenswrapper[4202]: I0223 13:00:41.252210 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} err="failed to get container status \"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": rpc error: code = NotFound desc = could not find container \"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": container with ID starting with c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219 not found: ID does not exist" Feb 23 13:00:41.252280 master-0 kubenswrapper[4202]: I0223 13:00:41.252252 4202 scope.go:117] "RemoveContainer" containerID="5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4" Feb 23 13:00:41.252863 master-0 kubenswrapper[4202]: I0223 13:00:41.252790 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} err="failed to get container status \"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": rpc error: code = NotFound desc = could not find container \"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": container with ID starting with 5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4 not found: ID does not exist" Feb 23 13:00:41.252863 master-0 kubenswrapper[4202]: I0223 13:00:41.252833 4202 scope.go:117] "RemoveContainer" containerID="683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" Feb 23 13:00:41.253416 master-0 kubenswrapper[4202]: I0223 13:00:41.253336 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} err="failed to get container status \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": rpc error: code = NotFound desc = could not find container 
\"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": container with ID starting with 683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e not found: ID does not exist" Feb 23 13:00:41.253514 master-0 kubenswrapper[4202]: I0223 13:00:41.253413 4202 scope.go:117] "RemoveContainer" containerID="a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" Feb 23 13:00:41.254011 master-0 kubenswrapper[4202]: I0223 13:00:41.253934 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} err="failed to get container status \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": rpc error: code = NotFound desc = could not find container \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": container with ID starting with a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970 not found: ID does not exist" Feb 23 13:00:41.254106 master-0 kubenswrapper[4202]: I0223 13:00:41.254015 4202 scope.go:117] "RemoveContainer" containerID="c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" Feb 23 13:00:41.254789 master-0 kubenswrapper[4202]: I0223 13:00:41.254739 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} err="failed to get container status \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": rpc error: code = NotFound desc = could not find container \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": container with ID starting with c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87 not found: ID does not exist" Feb 23 13:00:41.254789 master-0 kubenswrapper[4202]: I0223 13:00:41.254783 4202 scope.go:117] "RemoveContainer" containerID="4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" Feb 23 
13:00:41.255315 master-0 kubenswrapper[4202]: I0223 13:00:41.255265 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} err="failed to get container status \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": rpc error: code = NotFound desc = could not find container \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": container with ID starting with 4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4 not found: ID does not exist" Feb 23 13:00:41.255430 master-0 kubenswrapper[4202]: I0223 13:00:41.255327 4202 scope.go:117] "RemoveContainer" containerID="062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" Feb 23 13:00:41.256009 master-0 kubenswrapper[4202]: I0223 13:00:41.255964 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} err="failed to get container status \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": rpc error: code = NotFound desc = could not find container \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": container with ID starting with 062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09 not found: ID does not exist" Feb 23 13:00:41.256009 master-0 kubenswrapper[4202]: I0223 13:00:41.256012 4202 scope.go:117] "RemoveContainer" containerID="02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" Feb 23 13:00:41.256585 master-0 kubenswrapper[4202]: I0223 13:00:41.256521 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} err="failed to get container status \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": rpc error: code = NotFound desc = could not find container 
\"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": container with ID starting with 02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee not found: ID does not exist" Feb 23 13:00:41.256585 master-0 kubenswrapper[4202]: I0223 13:00:41.256573 4202 scope.go:117] "RemoveContainer" containerID="1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d" Feb 23 13:00:41.257514 master-0 kubenswrapper[4202]: I0223 13:00:41.257312 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} err="failed to get container status \"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": rpc error: code = NotFound desc = could not find container \"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": container with ID starting with 1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d not found: ID does not exist" Feb 23 13:00:41.257514 master-0 kubenswrapper[4202]: I0223 13:00:41.257403 4202 scope.go:117] "RemoveContainer" containerID="c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219" Feb 23 13:00:41.257896 master-0 kubenswrapper[4202]: I0223 13:00:41.257843 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} err="failed to get container status \"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": rpc error: code = NotFound desc = could not find container \"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": container with ID starting with c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219 not found: ID does not exist" Feb 23 13:00:41.257971 master-0 kubenswrapper[4202]: I0223 13:00:41.257891 4202 scope.go:117] "RemoveContainer" containerID="5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4" Feb 23 
13:00:41.260155 master-0 kubenswrapper[4202]: I0223 13:00:41.259209 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} err="failed to get container status \"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": rpc error: code = NotFound desc = could not find container \"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": container with ID starting with 5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4 not found: ID does not exist" Feb 23 13:00:41.260155 master-0 kubenswrapper[4202]: I0223 13:00:41.259297 4202 scope.go:117] "RemoveContainer" containerID="683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" Feb 23 13:00:41.260155 master-0 kubenswrapper[4202]: I0223 13:00:41.259997 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} err="failed to get container status \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": rpc error: code = NotFound desc = could not find container \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": container with ID starting with 683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e not found: ID does not exist" Feb 23 13:00:41.260155 master-0 kubenswrapper[4202]: I0223 13:00:41.260053 4202 scope.go:117] "RemoveContainer" containerID="a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" Feb 23 13:00:41.260939 master-0 kubenswrapper[4202]: I0223 13:00:41.260671 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} err="failed to get container status \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": rpc error: code = NotFound desc = could not find container 
\"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": container with ID starting with a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970 not found: ID does not exist" Feb 23 13:00:41.260939 master-0 kubenswrapper[4202]: I0223 13:00:41.260706 4202 scope.go:117] "RemoveContainer" containerID="c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" Feb 23 13:00:41.261323 master-0 kubenswrapper[4202]: I0223 13:00:41.261231 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} err="failed to get container status \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": rpc error: code = NotFound desc = could not find container \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": container with ID starting with c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87 not found: ID does not exist" Feb 23 13:00:41.261323 master-0 kubenswrapper[4202]: I0223 13:00:41.261302 4202 scope.go:117] "RemoveContainer" containerID="4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" Feb 23 13:00:41.262245 master-0 kubenswrapper[4202]: I0223 13:00:41.262156 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} err="failed to get container status \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": rpc error: code = NotFound desc = could not find container \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": container with ID starting with 4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4 not found: ID does not exist" Feb 23 13:00:41.262411 master-0 kubenswrapper[4202]: I0223 13:00:41.262247 4202 scope.go:117] "RemoveContainer" containerID="062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" Feb 23 
13:00:41.263075 master-0 kubenswrapper[4202]: I0223 13:00:41.262938 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} err="failed to get container status \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": rpc error: code = NotFound desc = could not find container \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": container with ID starting with 062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09 not found: ID does not exist" Feb 23 13:00:41.263075 master-0 kubenswrapper[4202]: I0223 13:00:41.262985 4202 scope.go:117] "RemoveContainer" containerID="02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" Feb 23 13:00:41.263745 master-0 kubenswrapper[4202]: I0223 13:00:41.263656 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} err="failed to get container status \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": rpc error: code = NotFound desc = could not find container \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": container with ID starting with 02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee not found: ID does not exist" Feb 23 13:00:41.263745 master-0 kubenswrapper[4202]: I0223 13:00:41.263719 4202 scope.go:117] "RemoveContainer" containerID="1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d" Feb 23 13:00:41.264535 master-0 kubenswrapper[4202]: I0223 13:00:41.264475 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d"} err="failed to get container status \"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": rpc error: code = NotFound desc = could not find container 
\"1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d\": container with ID starting with 1e92d9ad95c2647205799dc1953db32d61e11a2064c9cc7a95f5b5083708dc8d not found: ID does not exist" Feb 23 13:00:41.264535 master-0 kubenswrapper[4202]: I0223 13:00:41.264507 4202 scope.go:117] "RemoveContainer" containerID="c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219" Feb 23 13:00:41.265051 master-0 kubenswrapper[4202]: I0223 13:00:41.264979 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219"} err="failed to get container status \"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": rpc error: code = NotFound desc = could not find container \"c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219\": container with ID starting with c92e78e71fed7a2d3de7cfe8b4996fa909350eb45c916c01d52673fc9ba9a219 not found: ID does not exist" Feb 23 13:00:41.265051 master-0 kubenswrapper[4202]: I0223 13:00:41.265023 4202 scope.go:117] "RemoveContainer" containerID="5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4" Feb 23 13:00:41.265856 master-0 kubenswrapper[4202]: I0223 13:00:41.265784 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4"} err="failed to get container status \"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": rpc error: code = NotFound desc = could not find container \"5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4\": container with ID starting with 5f8d5f7ae89a69c9d327a5ebc0053da075aa07d47d5b8f773a2d2447a72c5dc4 not found: ID does not exist" Feb 23 13:00:41.265856 master-0 kubenswrapper[4202]: I0223 13:00:41.265821 4202 scope.go:117] "RemoveContainer" containerID="683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e" Feb 23 
13:00:41.266416 master-0 kubenswrapper[4202]: I0223 13:00:41.266289 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e"} err="failed to get container status \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": rpc error: code = NotFound desc = could not find container \"683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e\": container with ID starting with 683813d4b78d30ac39ba55c5e88aa66c87461a0d377890cbac6a16b53a20a95e not found: ID does not exist" Feb 23 13:00:41.266416 master-0 kubenswrapper[4202]: I0223 13:00:41.266334 4202 scope.go:117] "RemoveContainer" containerID="a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970" Feb 23 13:00:41.266889 master-0 kubenswrapper[4202]: I0223 13:00:41.266815 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970"} err="failed to get container status \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": rpc error: code = NotFound desc = could not find container \"a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970\": container with ID starting with a43da2297c859db20e13c90a4008e0b4864906d792bde0cf9d2a527107133970 not found: ID does not exist" Feb 23 13:00:41.266889 master-0 kubenswrapper[4202]: I0223 13:00:41.266855 4202 scope.go:117] "RemoveContainer" containerID="c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87" Feb 23 13:00:41.267839 master-0 kubenswrapper[4202]: I0223 13:00:41.267759 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87"} err="failed to get container status \"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": rpc error: code = NotFound desc = could not find container 
\"c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87\": container with ID starting with c9afa31fdcafe4c7132b554e1fcbc4849ae481230c3e56cc073656c7ec63cf87 not found: ID does not exist" Feb 23 13:00:41.267839 master-0 kubenswrapper[4202]: I0223 13:00:41.267817 4202 scope.go:117] "RemoveContainer" containerID="4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4" Feb 23 13:00:41.268405 master-0 kubenswrapper[4202]: I0223 13:00:41.268288 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4"} err="failed to get container status \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": rpc error: code = NotFound desc = could not find container \"4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4\": container with ID starting with 4f2622e8fbeb40b0ad2506ba42b36809b982cd24c5d98fcda791e68d181893c4 not found: ID does not exist" Feb 23 13:00:41.268405 master-0 kubenswrapper[4202]: I0223 13:00:41.268336 4202 scope.go:117] "RemoveContainer" containerID="062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09" Feb 23 13:00:41.268841 master-0 kubenswrapper[4202]: I0223 13:00:41.268767 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09"} err="failed to get container status \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": rpc error: code = NotFound desc = could not find container \"062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09\": container with ID starting with 062f7833d441f87cc402e6a48aa22a1faf07f3ffefde88b760f6fa5e444c9e09 not found: ID does not exist" Feb 23 13:00:41.268841 master-0 kubenswrapper[4202]: I0223 13:00:41.268827 4202 scope.go:117] "RemoveContainer" containerID="02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee" Feb 23 
13:00:41.269446 master-0 kubenswrapper[4202]: I0223 13:00:41.269382 4202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee"} err="failed to get container status \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": rpc error: code = NotFound desc = could not find container \"02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee\": container with ID starting with 02a0b72afee33b4b2080691ed04edf93eae59285c96668ac80361fa53f3732ee not found: ID does not exist" Feb 23 13:00:42.021839 master-0 kubenswrapper[4202]: I0223 13:00:42.021739 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"9e82defbcfdd8831f335b1c355122c74c72f821b81f5b06fcbea2702d38dce2b"} Feb 23 13:00:42.023291 master-0 kubenswrapper[4202]: I0223 13:00:42.021923 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"444554e9423c07c3b3ad7e484b7ff4cf4bc20a230c388b4e43db93e3d4fc1ec0"} Feb 23 13:00:42.023291 master-0 kubenswrapper[4202]: I0223 13:00:42.021955 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"2038967c60e1a26a53001a8ff9ceb49e790c0b3a83b11331c1faf8f28643bcc4"} Feb 23 13:00:42.023291 master-0 kubenswrapper[4202]: I0223 13:00:42.021975 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"0d195f56c59694fd9fae4bf7f0c62df9f3ef3c7eab88547fd2daf2d8e50e12cc"} Feb 23 13:00:42.023291 master-0 kubenswrapper[4202]: I0223 
13:00:42.021994 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"64a42a68575cf7ade1cb310705c8e8ee9f266556cda0ce2d51a7c641ab0ce72f"} Feb 23 13:00:42.023291 master-0 kubenswrapper[4202]: I0223 13:00:42.022012 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"c18689476f274f603d29a105b5cd42568b32e024fab2e4dcaa4782ce896d6f74"} Feb 23 13:00:42.532042 master-0 kubenswrapper[4202]: I0223 13:00:42.531953 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:42.532042 master-0 kubenswrapper[4202]: I0223 13:00:42.532004 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:42.536966 master-0 kubenswrapper[4202]: E0223 13:00:42.536881 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:42.537101 master-0 kubenswrapper[4202]: E0223 13:00:42.537025 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:42.539170 master-0 kubenswrapper[4202]: I0223 13:00:42.539096 4202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bf18b4-6d82-47ec-b51f-1221045c2975" path="/var/lib/kubelet/pods/25bf18b4-6d82-47ec-b51f-1221045c2975/volumes" Feb 23 13:00:44.533125 master-0 kubenswrapper[4202]: I0223 13:00:44.533021 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:44.534132 master-0 kubenswrapper[4202]: E0223 13:00:44.533194 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:44.534132 master-0 kubenswrapper[4202]: I0223 13:00:44.533024 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:44.534132 master-0 kubenswrapper[4202]: E0223 13:00:44.533376 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:45.039085 master-0 kubenswrapper[4202]: I0223 13:00:45.039020 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"0d27538c2f14a19093dfab183182de5b8616432547f92a5bb9bca268a439bf51"} Feb 23 13:00:46.532707 master-0 kubenswrapper[4202]: I0223 13:00:46.532172 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:46.533921 master-0 kubenswrapper[4202]: E0223 13:00:46.532764 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:46.533921 master-0 kubenswrapper[4202]: I0223 13:00:46.532316 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:46.533921 master-0 kubenswrapper[4202]: E0223 13:00:46.533192 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:47.059013 master-0 kubenswrapper[4202]: I0223 13:00:47.058935 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" event={"ID":"540b41b0-f574-46b9-8b2f-19e90ad5d0ce","Type":"ContainerStarted","Data":"a4721987c1c9b0b9e5310b35e36660ee05a472d4486e761d0360dcdc57161997"} Feb 23 13:00:47.059405 master-0 kubenswrapper[4202]: I0223 13:00:47.059364 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:47.089722 master-0 kubenswrapper[4202]: I0223 13:00:47.089632 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:47.182797 master-0 kubenswrapper[4202]: I0223 13:00:47.182694 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" podStartSLOduration=7.182668038 podStartE2EDuration="7.182668038s" podCreationTimestamp="2026-02-23 13:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:47.18195343 +0000 UTC m=+105.470815148" watchObservedRunningTime="2026-02-23 13:00:47.182668038 +0000 UTC m=+105.471529666" Feb 23 13:00:47.623555 master-0 kubenswrapper[4202]: I0223 13:00:47.623486 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:47.624562 master-0 kubenswrapper[4202]: E0223 13:00:47.623704 4202 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:00:47.624562 master-0 kubenswrapper[4202]: E0223 13:00:47.623831 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:51.623792181 +0000 UTC m=+169.912653849 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:00:48.063506 master-0 kubenswrapper[4202]: I0223 13:00:48.063393 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:48.063506 master-0 kubenswrapper[4202]: I0223 13:00:48.063449 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:48.133852 master-0 kubenswrapper[4202]: I0223 13:00:48.133741 4202 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:48.532727 master-0 kubenswrapper[4202]: I0223 13:00:48.532643 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:48.533036 master-0 kubenswrapper[4202]: I0223 13:00:48.532805 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:48.533036 master-0 kubenswrapper[4202]: E0223 13:00:48.532936 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:48.533205 master-0 kubenswrapper[4202]: E0223 13:00:48.533039 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:49.140799 master-0 kubenswrapper[4202]: I0223 13:00:49.140698 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:49.145067 master-0 kubenswrapper[4202]: E0223 13:00:49.140876 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:49.145067 master-0 kubenswrapper[4202]: E0223 13:00:49.140908 4202 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:49.145067 
master-0 kubenswrapper[4202]: E0223 13:00:49.140925 4202 projected.go:194] Error preparing data for projected volume kube-api-access-tmrjc for pod openshift-network-diagnostics/network-check-target-rnz52: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:49.145067 master-0 kubenswrapper[4202]: E0223 13:00:49.140986 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc podName:f81886b9-fcd3-4666-b550-0688072210f7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.1409637 +0000 UTC m=+139.429825338 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tmrjc" (UniqueName: "kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc") pod "network-check-target-rnz52" (UID: "f81886b9-fcd3-4666-b550-0688072210f7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:50.532848 master-0 kubenswrapper[4202]: I0223 13:00:50.532768 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:50.533999 master-0 kubenswrapper[4202]: I0223 13:00:50.532986 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:50.533999 master-0 kubenswrapper[4202]: E0223 13:00:50.533306 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbrcr" podUID="e941c759-ab95-4b30-a571-6c132ab0e639" Feb 23 13:00:50.533999 master-0 kubenswrapper[4202]: E0223 13:00:50.533498 4202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rnz52" podUID="f81886b9-fcd3-4666-b550-0688072210f7" Feb 23 13:00:52.054225 master-0 kubenswrapper[4202]: I0223 13:00:52.053793 4202 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Feb 23 13:00:52.055092 master-0 kubenswrapper[4202]: I0223 13:00:52.054326 4202 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 23 13:00:52.106368 master-0 kubenswrapper[4202]: I0223 13:00:52.104907 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"] Feb 23 13:00:52.106368 master-0 kubenswrapper[4202]: I0223 13:00:52.106104 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:52.108110 master-0 kubenswrapper[4202]: I0223 13:00:52.108040 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"] Feb 23 13:00:52.108888 master-0 kubenswrapper[4202]: I0223 13:00:52.108851 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"] Feb 23 13:00:52.108949 master-0 kubenswrapper[4202]: I0223 13:00:52.108911 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:00:52.109834 master-0 kubenswrapper[4202]: I0223 13:00:52.109781 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 23 13:00:52.109883 master-0 kubenswrapper[4202]: I0223 13:00:52.109866 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:52.114375 master-0 kubenswrapper[4202]: I0223 13:00:52.111680 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 23 13:00:52.114375 master-0 kubenswrapper[4202]: I0223 13:00:52.112094 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 23 13:00:52.122244 master-0 kubenswrapper[4202]: I0223 13:00:52.119241 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"] Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.132203 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.132477 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.132732 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.132898 4202 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.132972 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.133098 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.132928 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.133211 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 13:00:52.133361 master-0 kubenswrapper[4202]: I0223 13:00:52.133329 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:00:52.134169 master-0 kubenswrapper[4202]: I0223 13:00:52.133489 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 13:00:52.136443 master-0 kubenswrapper[4202]: I0223 13:00:52.134246 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"] Feb 23 13:00:52.136443 master-0 kubenswrapper[4202]: I0223 13:00:52.134894 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:52.136443 master-0 kubenswrapper[4202]: I0223 13:00:52.135791 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 23 13:00:52.136443 master-0 kubenswrapper[4202]: I0223 13:00:52.135944 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.136443 master-0 kubenswrapper[4202]: I0223 13:00:52.136247 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"]
Feb 23 13:00:52.137038 master-0 kubenswrapper[4202]: I0223 13:00:52.136615 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:52.138566 master-0 kubenswrapper[4202]: I0223 13:00:52.137100 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"]
Feb 23 13:00:52.138566 master-0 kubenswrapper[4202]: I0223 13:00:52.137392 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.138566 master-0 kubenswrapper[4202]: I0223 13:00:52.138278 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"]
Feb 23 13:00:52.139542 master-0 kubenswrapper[4202]: I0223 13:00:52.138944 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:52.139542 master-0 kubenswrapper[4202]: I0223 13:00:52.138983 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 23 13:00:52.139542 master-0 kubenswrapper[4202]: I0223 13:00:52.139209 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.140840 master-0 kubenswrapper[4202]: I0223 13:00:52.139615 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 23 13:00:52.140840 master-0 kubenswrapper[4202]: I0223 13:00:52.139938 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"]
Feb 23 13:00:52.140840 master-0 kubenswrapper[4202]: I0223 13:00:52.140440 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:52.140840 master-0 kubenswrapper[4202]: I0223 13:00:52.140524 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:00:52.140840 master-0 kubenswrapper[4202]: I0223 13:00:52.140439 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-588zk"]
Feb 23 13:00:52.141433 master-0 kubenswrapper[4202]: I0223 13:00:52.141243 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"]
Feb 23 13:00:52.142291 master-0 kubenswrapper[4202]: I0223 13:00:52.141640 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:00:52.142291 master-0 kubenswrapper[4202]: I0223 13:00:52.141890 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.149806 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"]
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.150332 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.151374 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.151484 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.151674 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.151844 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-g8fdn"]
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.151926 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.152024 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.152113 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.152257 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p"]
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.152436 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"]
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.152612 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.152738 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.153137 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:00:52.154250 master-0 kubenswrapper[4202]: I0223 13:00:52.153679 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p"
Feb 23 13:00:52.156619 master-0 kubenswrapper[4202]: I0223 13:00:52.155153 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 23 13:00:52.156619 master-0 kubenswrapper[4202]: I0223 13:00:52.155384 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 23 13:00:52.156780 master-0 kubenswrapper[4202]: I0223 13:00:52.156668 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 23 13:00:52.157550 master-0 kubenswrapper[4202]: I0223 13:00:52.156837 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.157550 master-0 kubenswrapper[4202]: I0223 13:00:52.157007 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 23 13:00:52.157550 master-0 kubenswrapper[4202]: I0223 13:00:52.157167 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 23 13:00:52.157550 master-0 kubenswrapper[4202]: I0223 13:00:52.157310 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 23 13:00:52.157550 master-0 kubenswrapper[4202]: I0223 13:00:52.157508 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.157723 master-0 kubenswrapper[4202]: I0223 13:00:52.157677 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 23 13:00:52.159651 master-0 kubenswrapper[4202]: I0223 13:00:52.157953 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.159651 master-0 kubenswrapper[4202]: I0223 13:00:52.158259 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.159651 master-0 kubenswrapper[4202]: I0223 13:00:52.158378 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 23 13:00:52.159651 master-0 kubenswrapper[4202]: I0223 13:00:52.158561 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.159651 master-0 kubenswrapper[4202]: I0223 13:00:52.158839 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"]
Feb 23 13:00:52.159651 master-0 kubenswrapper[4202]: I0223 13:00:52.159369 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"]
Feb 23 13:00:52.159651 master-0 kubenswrapper[4202]: I0223 13:00:52.159629 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:52.159948 master-0 kubenswrapper[4202]: I0223 13:00:52.159685 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-k9h69"]
Feb 23 13:00:52.159948 master-0 kubenswrapper[4202]: I0223 13:00:52.159803 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.161249 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"]
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.161561 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-qg27h"]
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.161839 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"]
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.161939 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.162150 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.162299 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.162387 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.162618 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.162989 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"]
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.163054 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 23 13:00:52.163884 master-0 kubenswrapper[4202]: I0223 13:00:52.163375 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 23 13:00:52.171749 master-0 kubenswrapper[4202]: I0223 13:00:52.165546 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.171749 master-0 kubenswrapper[4202]: I0223 13:00:52.166053 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"]
Feb 23 13:00:52.171749 master-0 kubenswrapper[4202]: I0223 13:00:52.168631 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.171749 master-0 kubenswrapper[4202]: I0223 13:00:52.169328 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 13:00:52.171749 master-0 kubenswrapper[4202]: I0223 13:00:52.170426 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.171749 master-0 kubenswrapper[4202]: I0223 13:00:52.170643 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 23 13:00:52.171749 master-0 kubenswrapper[4202]: I0223 13:00:52.171465 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.175187 master-0 kubenswrapper[4202]: I0223 13:00:52.172148 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.175187 master-0 kubenswrapper[4202]: I0223 13:00:52.172429 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.175187 master-0 kubenswrapper[4202]: I0223 13:00:52.173131 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"]
Feb 23 13:00:52.175187 master-0 kubenswrapper[4202]: I0223 13:00:52.173179 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"]
Feb 23 13:00:52.175187 master-0 kubenswrapper[4202]: I0223 13:00:52.173149 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 23 13:00:52.175187 master-0 kubenswrapper[4202]: I0223 13:00:52.174148 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 23 13:00:52.175187 master-0 kubenswrapper[4202]: I0223 13:00:52.174316 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 23 13:00:52.177378 master-0 kubenswrapper[4202]: I0223 13:00:52.177287 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 23 13:00:52.177816 master-0 kubenswrapper[4202]: I0223 13:00:52.177507 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.177816 master-0 kubenswrapper[4202]: I0223 13:00:52.177528 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.179980 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180089 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180194 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180319 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180467 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9jc\" (UniqueName: \"kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180496 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrjc\" (UniqueName: \"kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180526 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqzj\" (UniqueName: \"kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180545 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180581 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180599 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180640 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180668 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180686 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfqmb\" (UniqueName: \"kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180710 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180733 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180820 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"]
Feb 23 13:00:52.182082 master-0 kubenswrapper[4202]: I0223 13:00:52.180837 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.180881 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fsdx\" (UniqueName: \"kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.180931 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181066 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181113 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181157 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qsvg\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181211 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181245 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt4vh\" (UniqueName: \"kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181315 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181368 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181400 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181438 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181467 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181507 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181545 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:00:52.182575 master-0 kubenswrapper[4202]: I0223 13:00:52.181598 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.181613 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.181645 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.181824 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mhb\" (UniqueName: \"kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.181887 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.181917 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.181989 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182013 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182037 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182060 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182039 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"]
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182138 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182161 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182141 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182238 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182536 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 23 13:00:52.182946 master-0 kubenswrapper[4202]: I0223 13:00:52.182930 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-g8fdn"]
Feb 23 13:00:52.183648 master-0 kubenswrapper[4202]: I0223 13:00:52.183614 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"]
Feb 23 13:00:52.183648 master-0 kubenswrapper[4202]: I0223 13:00:52.183640 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 23 13:00:52.184374 master-0 kubenswrapper[4202]: I0223 13:00:52.184316 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 23 13:00:52.185079 master-0 kubenswrapper[4202]: I0223 13:00:52.185060 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 13:00:52.186788 master-0 kubenswrapper[4202]: I0223 13:00:52.186756 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"]
Feb 23 13:00:52.187218 master-0 kubenswrapper[4202]: I0223 13:00:52.187072 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 23 13:00:52.187638 master-0 kubenswrapper[4202]: I0223 13:00:52.187593 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"]
Feb 23 13:00:52.188766 master-0 kubenswrapper[4202]: I0223 13:00:52.188650 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 23 13:00:52.189000 master-0 kubenswrapper[4202]: I0223 13:00:52.188975 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 23 13:00:52.189056 master-0 kubenswrapper[4202]: I0223 13:00:52.188974 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 23 13:00:52.190062 master-0 kubenswrapper[4202]: I0223 13:00:52.190022 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 23 13:00:52.191417 master-0 kubenswrapper[4202]: I0223 13:00:52.191401 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-k9h69"]
Feb 23 13:00:52.192863 master-0 kubenswrapper[4202]: I0223 13:00:52.192798 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"]
Feb 23 13:00:52.194022 master-0 kubenswrapper[4202]: I0223 13:00:52.194002 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"]
Feb 23 13:00:52.195218 master-0 kubenswrapper[4202]: I0223 13:00:52.195205 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"]
Feb 23 13:00:52.196236 master-0 kubenswrapper[4202]: I0223 13:00:52.196221 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"]
Feb 23 13:00:52.198117 master-0 kubenswrapper[4202]: I0223 13:00:52.198101 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p"]
Feb 23 13:00:52.199231 master-0 kubenswrapper[4202]: I0223 13:00:52.199079 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-588zk"]
Feb 23 13:00:52.200090 master-0 kubenswrapper[4202]: I0223 13:00:52.200038 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"]
Feb 23
13:00:52.283132 master-0 kubenswrapper[4202]: I0223 13:00:52.283091 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfqh\" (UniqueName: \"kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh\") pod \"csi-snapshot-controller-operator-6fb4df594f-f5n2p\" (UID: \"4b9d6485-cf67-49c5-99c1-b8582a0bab70\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" Feb 23 13:00:52.283243 master-0 kubenswrapper[4202]: I0223 13:00:52.283134 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.283243 master-0 kubenswrapper[4202]: I0223 13:00:52.283163 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9jc\" (UniqueName: \"kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:00:52.283243 master-0 kubenswrapper[4202]: I0223 13:00:52.283182 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrjc\" (UniqueName: \"kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:00:52.283521 master-0 kubenswrapper[4202]: I0223 13:00:52.283473 4202 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.283627 master-0 kubenswrapper[4202]: I0223 13:00:52.283541 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6mk9\" (UniqueName: \"kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:00:52.283627 master-0 kubenswrapper[4202]: I0223 13:00:52.283592 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:52.283724 master-0 kubenswrapper[4202]: I0223 13:00:52.283632 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqzj\" (UniqueName: \"kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" Feb 23 13:00:52.283724 master-0 kubenswrapper[4202]: I0223 13:00:52.283662 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wplcg\" (UniqueName: \"kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg\") pod 
\"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:52.283724 master-0 kubenswrapper[4202]: I0223 13:00:52.283690 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.283836 master-0 kubenswrapper[4202]: I0223 13:00:52.283816 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.283873 master-0 kubenswrapper[4202]: I0223 13:00:52.283861 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:00:52.283908 master-0 kubenswrapper[4202]: I0223 13:00:52.283890 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:52.283956 
master-0 kubenswrapper[4202]: I0223 13:00:52.283915 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.284206 master-0 kubenswrapper[4202]: I0223 13:00:52.284103 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:00:52.284261 master-0 kubenswrapper[4202]: I0223 13:00:52.284229 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfqmb\" (UniqueName: \"kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:52.284303 master-0 kubenswrapper[4202]: I0223 13:00:52.284277 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:52.284358 master-0 kubenswrapper[4202]: I0223 13:00:52.284308 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:52.284423 master-0 kubenswrapper[4202]: I0223 13:00:52.284397 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:52.285317 master-0 kubenswrapper[4202]: I0223 13:00:52.285282 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:00:52.285387 master-0 kubenswrapper[4202]: I0223 13:00:52.285326 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:52.285531 master-0 kubenswrapper[4202]: I0223 13:00:52.285499 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:00:52.285590 master-0 kubenswrapper[4202]: I0223 13:00:52.285534 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:00:52.286426 master-0 kubenswrapper[4202]: I0223 13:00:52.286301 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fsdx\" (UniqueName: \"kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:00:52.286426 master-0 kubenswrapper[4202]: I0223 13:00:52.286392 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:52.286510 master-0 kubenswrapper[4202]: I0223 13:00:52.286423 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pntn4\" (UniqueName: \"kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 
13:00:52.286510 master-0 kubenswrapper[4202]: I0223 13:00:52.286475 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:52.286573 master-0 kubenswrapper[4202]: I0223 13:00:52.286511 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:52.286573 master-0 kubenswrapper[4202]: I0223 13:00:52.286557 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:52.286664 master-0 kubenswrapper[4202]: I0223 13:00:52.286586 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9lvg\" (UniqueName: \"kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:00:52.286664 master-0 kubenswrapper[4202]: I0223 13:00:52.286618 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qsvg\" (UniqueName: 
\"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:52.286664 master-0 kubenswrapper[4202]: I0223 13:00:52.286645 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw97s\" (UniqueName: \"kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:00:52.286809 master-0 kubenswrapper[4202]: I0223 13:00:52.286669 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.286809 master-0 kubenswrapper[4202]: I0223 13:00:52.286700 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:00:52.286809 master-0 kubenswrapper[4202]: I0223 13:00:52.286725 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt4vh\" (UniqueName: \"kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh\") pod 
\"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:52.286809 master-0 kubenswrapper[4202]: I0223 13:00:52.286751 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:00:52.286809 master-0 kubenswrapper[4202]: I0223 13:00:52.286801 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" Feb 23 13:00:52.286995 master-0 kubenswrapper[4202]: I0223 13:00:52.286844 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" Feb 23 13:00:52.286995 master-0 kubenswrapper[4202]: I0223 13:00:52.286872 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:52.286995 master-0 kubenswrapper[4202]: I0223 
13:00:52.286899 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:00:52.286995 master-0 kubenswrapper[4202]: I0223 13:00:52.286928 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58xrl\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.286995 master-0 kubenswrapper[4202]: I0223 13:00:52.286954 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:00:52.286995 master-0 kubenswrapper[4202]: I0223 13:00:52.287000 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:00:52.287255 master-0 kubenswrapper[4202]: I0223 13:00:52.287034 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:00:52.287255 master-0 kubenswrapper[4202]: I0223 13:00:52.287066 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:52.287255 master-0 kubenswrapper[4202]: I0223 13:00:52.287096 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:52.287255 master-0 kubenswrapper[4202]: I0223 13:00:52.287121 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.287255 master-0 kubenswrapper[4202]: I0223 13:00:52.287150 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: 
\"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:52.287487 master-0 kubenswrapper[4202]: I0223 13:00:52.287188 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:52.287552 master-0 kubenswrapper[4202]: I0223 13:00:52.287487 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wr82\" (UniqueName: \"kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.287552 master-0 kubenswrapper[4202]: I0223 13:00:52.287514 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:00:52.287552 master-0 kubenswrapper[4202]: I0223 13:00:52.287539 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:00:52.287662 master-0 kubenswrapper[4202]: I0223 13:00:52.287568 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:52.288478 master-0 kubenswrapper[4202]: E0223 13:00:52.288422 4202 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:00:52.288566 master-0 kubenswrapper[4202]: I0223 13:00:52.288535 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:52.288566 master-0 kubenswrapper[4202]: E0223 13:00:52.288548 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.788510231 +0000 UTC m=+111.077372059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found
Feb 23 13:00:52.288798 master-0 kubenswrapper[4202]: I0223 13:00:52.288769 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:00:52.289170 master-0 kubenswrapper[4202]: I0223 13:00:52.289138 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.289574 master-0 kubenswrapper[4202]: E0223 13:00:52.289498 4202 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:00:52.289615 master-0 kubenswrapper[4202]: E0223 13:00:52.289578 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.789555076 +0000 UTC m=+111.078416904 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found
Feb 23 13:00:52.289678 master-0 kubenswrapper[4202]: I0223 13:00:52.289653 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mhb\" (UniqueName: \"kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.289730 master-0 kubenswrapper[4202]: I0223 13:00:52.289698 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.289730 master-0 kubenswrapper[4202]: I0223 13:00:52.289725 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:52.289787 master-0 kubenswrapper[4202]: I0223 13:00:52.289757 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:52.289814 master-0 kubenswrapper[4202]: I0223 13:00:52.289789 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxv7\" (UniqueName: \"kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"
Feb 23 13:00:52.289841 master-0 kubenswrapper[4202]: E0223 13:00:52.289794 4202 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:52.289841 master-0 kubenswrapper[4202]: I0223 13:00:52.289820 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.289890 master-0 kubenswrapper[4202]: I0223 13:00:52.289849 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:52.289920 master-0 kubenswrapper[4202]: E0223 13:00:52.289894 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.789869575 +0000 UTC m=+111.078731203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:52.289954 master-0 kubenswrapper[4202]: I0223 13:00:52.289930 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.289983 master-0 kubenswrapper[4202]: I0223 13:00:52.289963 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.290010 master-0 kubenswrapper[4202]: I0223 13:00:52.289988 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:52.290038 master-0 kubenswrapper[4202]: I0223 13:00:52.290012 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.290542 master-0 kubenswrapper[4202]: I0223 13:00:52.290061 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:52.290542 master-0 kubenswrapper[4202]: E0223 13:00:52.290140 4202 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:00:52.290542 master-0 kubenswrapper[4202]: I0223 13:00:52.290121 4202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:52.290542 master-0 kubenswrapper[4202]: I0223 13:00:52.290308 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:52.290542 master-0 kubenswrapper[4202]: E0223 13:00:52.290398 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.790321086 +0000 UTC m=+111.079182724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found
Feb 23 13:00:52.290704 master-0 kubenswrapper[4202]: E0223 13:00:52.290565 4202 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:00:52.290733 master-0 kubenswrapper[4202]: E0223 13:00:52.290708 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.790695465 +0000 UTC m=+111.079557093 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found
Feb 23 13:00:52.290993 master-0 kubenswrapper[4202]: I0223 13:00:52.290956 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:52.291161 master-0 kubenswrapper[4202]: I0223 13:00:52.291126 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:52.291833 master-0 kubenswrapper[4202]: I0223 13:00:52.291784 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.292458 master-0 kubenswrapper[4202]: I0223 13:00:52.292422 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.292765 master-0 kubenswrapper[4202]: I0223 13:00:52.292728 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:00:52.292814 master-0 kubenswrapper[4202]: I0223 13:00:52.292742 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:52.293097 master-0 kubenswrapper[4202]: I0223 13:00:52.293067 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:52.293214 master-0 kubenswrapper[4202]: I0223 13:00:52.293162 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:52.293679 master-0 kubenswrapper[4202]: I0223 13:00:52.293656 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:52.293772 master-0 kubenswrapper[4202]: I0223 13:00:52.293672 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:52.293976 master-0 kubenswrapper[4202]: I0223 13:00:52.293941 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.294036 master-0 kubenswrapper[4202]: I0223 13:00:52.294010 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:52.295038 master-0 kubenswrapper[4202]: I0223 13:00:52.294998 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.310726 master-0 kubenswrapper[4202]: I0223 13:00:52.310674 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:52.314664 master-0 kubenswrapper[4202]: I0223 13:00:52.314619 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt4vh\" (UniqueName: \"kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:52.314978 master-0 kubenswrapper[4202]: I0223 13:00:52.314940 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfqmb\" (UniqueName: \"kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:52.316064 master-0 kubenswrapper[4202]: I0223 13:00:52.316035 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mhb\" (UniqueName: \"kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:52.316720 master-0 kubenswrapper[4202]: I0223 13:00:52.316689 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrjc\" (UniqueName: \"kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:52.317410 master-0 kubenswrapper[4202]: I0223 13:00:52.317104 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qsvg\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.319167 master-0 kubenswrapper[4202]: I0223 13:00:52.319135 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9jc\" (UniqueName: \"kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:52.320068 master-0 kubenswrapper[4202]: I0223 13:00:52.320025 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:52.322233 master-0 kubenswrapper[4202]: I0223 13:00:52.322176 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fsdx\" (UniqueName: \"kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:00:52.322699 master-0 kubenswrapper[4202]: I0223 13:00:52.322654 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:52.331608 master-0 kubenswrapper[4202]: I0223 13:00:52.331548 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:00:52.333052 master-0 kubenswrapper[4202]: I0223 13:00:52.333000 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqzj\" (UniqueName: \"kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:52.391589 master-0 kubenswrapper[4202]: I0223 13:00:52.391530 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xrl\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:52.391715
master-0 kubenswrapper[4202]: I0223 13:00:52.391605 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:00:52.391832 master-0 kubenswrapper[4202]: I0223 13:00:52.391786 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:52.391886 master-0 kubenswrapper[4202]: I0223 13:00:52.391852 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:52.392113 master-0 kubenswrapper[4202]: I0223 13:00:52.392051 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:52.392200 master-0 kubenswrapper[4202]: I0223 13:00:52.392137 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"
Feb 23 13:00:52.392514 master-0 kubenswrapper[4202]: E0223 13:00:52.392477 4202 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:52.392668 master-0 kubenswrapper[4202]: E0223 13:00:52.392650 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.892622126 +0000 UTC m=+111.181483764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found
Feb 23 13:00:52.393070 master-0 kubenswrapper[4202]: I0223 13:00:52.393016 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:52.393478 master-0 kubenswrapper[4202]: I0223 13:00:52.393417 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:52.393559 master-0 kubenswrapper[4202]: I0223 13:00:52.393482 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wr82\" (UniqueName: \"kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:52.393559 master-0 kubenswrapper[4202]: I0223 13:00:52.393546 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:00:52.393642 master-0 kubenswrapper[4202]: I0223 13:00:52.393571 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:52.393682 master-0 kubenswrapper[4202]: I0223 13:00:52.393648 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:52.393733 master-0 kubenswrapper[4202]: I0223 13:00:52.393711 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxv7\" (UniqueName: \"kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"
Feb 23 13:00:52.393778 master-0 kubenswrapper[4202]: I0223 13:00:52.393762 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:52.393818 master-0 kubenswrapper[4202]: I0223 13:00:52.393802 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:52.393857 master-0 kubenswrapper[4202]: I0223 13:00:52.393829 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6mk9\" (UniqueName: \"kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:00:52.393894 master-0 kubenswrapper[4202]: I0223 13:00:52.393859 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgfqh\" (UniqueName: \"kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh\") pod \"csi-snapshot-controller-operator-6fb4df594f-f5n2p\" (UID: \"4b9d6485-cf67-49c5-99c1-b8582a0bab70\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p"
Feb 23
13:00:52.393894 master-0 kubenswrapper[4202]: I0223 13:00:52.393886 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:52.393966 master-0 kubenswrapper[4202]: I0223 13:00:52.393932 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wplcg\" (UniqueName: \"kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:00:52.393966 master-0 kubenswrapper[4202]: I0223 13:00:52.393959 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:52.394035 master-0 kubenswrapper[4202]: I0223 13:00:52.393985 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:52.394035 master-0 kubenswrapper[4202]: I0223 13:00:52.394026 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:52.394201 master-0 kubenswrapper[4202]: E0223 13:00:52.394167 4202 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:52.394254 master-0 kubenswrapper[4202]: I0223 13:00:52.394178 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:52.394254 master-0 kubenswrapper[4202]: E0223 13:00:52.394238 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.894213826 +0000 UTC m=+111.183075684 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found
Feb 23 13:00:52.394409 master-0 kubenswrapper[4202]: E0223 13:00:52.394387 4202 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:52.394532 master-0 kubenswrapper[4202]: E0223 13:00:52.394484 4202 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:00:52.394597 master-0 kubenswrapper[4202]: E0223 13:00:52.394508 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.894491503 +0000 UTC m=+111.183353141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:52.394644 master-0 kubenswrapper[4202]: E0223 13:00:52.394616 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:52.894579535 +0000 UTC m=+111.183441203 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found
Feb 23 13:00:52.394762 master-0 kubenswrapper[4202]: I0223 13:00:52.394730 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:52.395117 master-0 kubenswrapper[4202]: I0223 13:00:52.395009 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:00:52.395188 master-0 kubenswrapper[4202]: I0223 13:00:52.395164 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"
Feb 23 13:00:52.395269 master-0 kubenswrapper[4202]: I0223 13:00:52.395219 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pntn4\" (UniqueName: \"kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\"
(UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:00:52.395402 master-0 kubenswrapper[4202]: I0223 13:00:52.395329 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9lvg\" (UniqueName: \"kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:00:52.395487 master-0 kubenswrapper[4202]: I0223 13:00:52.395444 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw97s\" (UniqueName: \"kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:00:52.395562 master-0 kubenswrapper[4202]: I0223 13:00:52.395530 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.396147 master-0 kubenswrapper[4202]: I0223 13:00:52.396095 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:52.396214 master-0 kubenswrapper[4202]: I0223 13:00:52.396120 4202 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.396214 master-0 kubenswrapper[4202]: I0223 13:00:52.396192 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.396379 master-0 kubenswrapper[4202]: I0223 13:00:52.396333 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.396616 master-0 kubenswrapper[4202]: I0223 13:00:52.396572 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:52.396684 master-0 kubenswrapper[4202]: I0223 13:00:52.396659 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") 
" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:52.396795 master-0 kubenswrapper[4202]: I0223 13:00:52.396766 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:52.398273 master-0 kubenswrapper[4202]: I0223 13:00:52.398238 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.413154 master-0 kubenswrapper[4202]: I0223 13:00:52.413092 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xrl\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.423536 master-0 kubenswrapper[4202]: I0223 13:00:52.423492 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wr82\" (UniqueName: \"kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.474399 master-0 kubenswrapper[4202]: I0223 13:00:52.474190 4202 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-m6mk9\" (UniqueName: \"kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:00:52.482580 master-0 kubenswrapper[4202]: I0223 13:00:52.482530 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:00:52.485848 master-0 kubenswrapper[4202]: I0223 13:00:52.485818 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfqh\" (UniqueName: \"kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh\") pod \"csi-snapshot-controller-operator-6fb4df594f-f5n2p\" (UID: \"4b9d6485-cf67-49c5-99c1-b8582a0bab70\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" Feb 23 13:00:52.495929 master-0 kubenswrapper[4202]: I0223 13:00:52.495854 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:52.503271 master-0 kubenswrapper[4202]: I0223 13:00:52.503219 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" Feb 23 13:00:52.510661 master-0 kubenswrapper[4202]: I0223 13:00:52.510608 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wplcg\" (UniqueName: \"kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:52.512515 master-0 kubenswrapper[4202]: I0223 13:00:52.512444 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:00:52.527539 master-0 kubenswrapper[4202]: I0223 13:00:52.527477 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxv7\" (UniqueName: \"kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:52.531416 master-0 kubenswrapper[4202]: I0223 13:00:52.531364 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:00:52.532246 master-0 kubenswrapper[4202]: I0223 13:00:52.532195 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:52.532509 master-0 kubenswrapper[4202]: I0223 13:00:52.532364 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:52.536627 master-0 kubenswrapper[4202]: I0223 13:00:52.536577 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:00:52.547091 master-0 kubenswrapper[4202]: I0223 13:00:52.547032 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:52.552139 master-0 kubenswrapper[4202]: I0223 13:00:52.551915 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pntn4\" (UniqueName: \"kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:00:52.571777 master-0 kubenswrapper[4202]: I0223 13:00:52.571044 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9lvg\" (UniqueName: \"kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:00:52.578384 master-0 kubenswrapper[4202]: I0223 13:00:52.576367 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:52.594767 master-0 kubenswrapper[4202]: I0223 13:00:52.594004 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw97s\" (UniqueName: \"kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:00:52.625879 master-0 kubenswrapper[4202]: I0223 13:00:52.625807 4202 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.635141 master-0 kubenswrapper[4202]: I0223 13:00:52.634131 4202 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 13:00:52.652358 master-0 kubenswrapper[4202]: I0223 13:00:52.650615 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" Feb 23 13:00:52.652358 master-0 kubenswrapper[4202]: I0223 13:00:52.652167 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 13:00:52.656491 master-0 kubenswrapper[4202]: I0223 13:00:52.654803 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:52.671370 master-0 kubenswrapper[4202]: I0223 13:00:52.668447 4202 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:00:52.675380 master-0 kubenswrapper[4202]: I0223 13:00:52.672634 4202 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 13:00:52.693746 master-0 kubenswrapper[4202]: I0223 13:00:52.693707 4202 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:52.804950 master-0 kubenswrapper[4202]: I0223 13:00:52.804875 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:52.804950 master-0 kubenswrapper[4202]: I0223 13:00:52.804958 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:52.804950 master-0 kubenswrapper[4202]: I0223 13:00:52.804983 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: 
E0223 13:00:52.815154 4202 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: E0223 13:00:52.815817 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.815754438 +0000 UTC m=+112.104616066 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: E0223 13:00:52.816850 4202 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: E0223 13:00:52.816964 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.816937058 +0000 UTC m=+112.105798686 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: E0223 13:00:52.817064 4202 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: E0223 13:00:52.818053 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.818016945 +0000 UTC m=+112.106878573 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: E0223 13:00:52.817278 4202 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: E0223 13:00:52.818105 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.818096557 +0000 UTC m=+112.106958175 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: I0223 13:00:52.805034 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:52.821279 master-0 kubenswrapper[4202]: I0223 13:00:52.821224 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:00:52.822574 master-0 kubenswrapper[4202]: E0223 13:00:52.821486 4202 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:00:52.822574 master-0 kubenswrapper[4202]: E0223 13:00:52.821636 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.821604255 +0000 UTC m=+112.110465883 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found Feb 23 13:00:52.923150 master-0 kubenswrapper[4202]: E0223 13:00:52.922592 4202 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:00:52.923384 master-0 kubenswrapper[4202]: E0223 13:00:52.923198 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.923171488 +0000 UTC m=+112.212033116 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found Feb 23 13:00:52.923384 master-0 kubenswrapper[4202]: I0223 13:00:52.923076 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:52.923506 master-0 kubenswrapper[4202]: I0223 13:00:52.923388 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:00:52.923506 
master-0 kubenswrapper[4202]: I0223 13:00:52.923418 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:00:52.923506 master-0 kubenswrapper[4202]: I0223 13:00:52.923455 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:00:52.923747 master-0 kubenswrapper[4202]: E0223 13:00:52.923695 4202 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:00:52.923804 master-0 kubenswrapper[4202]: E0223 13:00:52.923788 4202 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:00:52.923843 master-0 kubenswrapper[4202]: E0223 13:00:52.923804 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.923774242 +0000 UTC m=+112.212635870 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found Feb 23 13:00:52.923843 master-0 kubenswrapper[4202]: E0223 13:00:52.923823 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.923813823 +0000 UTC m=+112.212675451 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:00:52.923980 master-0 kubenswrapper[4202]: E0223 13:00:52.923948 4202 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:00:52.924022 master-0 kubenswrapper[4202]: E0223 13:00:52.923989 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:53.923980467 +0000 UTC m=+112.212842095 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found Feb 23 13:00:53.032317 master-0 kubenswrapper[4202]: I0223 13:00:53.032257 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"] Feb 23 13:00:53.045582 master-0 kubenswrapper[4202]: W0223 13:00:53.045519 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71cb2f21_6d27_411f_9c2f_d5fa286895a7.slice/crio-08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d WatchSource:0}: Error finding container 08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d: Status 404 returned error can't find the container with id 08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d Feb 23 13:00:53.055666 master-0 kubenswrapper[4202]: I0223 13:00:53.055620 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"] Feb 23 13:00:53.081574 master-0 kubenswrapper[4202]: I0223 13:00:53.080763 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"] Feb 23 13:00:53.085010 master-0 kubenswrapper[4202]: I0223 13:00:53.084951 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" event={"ID":"71cb2f21-6d27-411f-9c2f-d5fa286895a7","Type":"ContainerStarted","Data":"08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d"} Feb 23 13:00:53.092332 master-0 kubenswrapper[4202]: I0223 13:00:53.088255 4202 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p"] Feb 23 13:00:53.092332 master-0 kubenswrapper[4202]: W0223 13:00:53.089767 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d0a976c_1492_4989_a5ff_e386564dd6ba.slice/crio-0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa WatchSource:0}: Error finding container 0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa: Status 404 returned error can't find the container with id 0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa Feb 23 13:00:53.092332 master-0 kubenswrapper[4202]: I0223 13:00:53.090525 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-qg27h" event={"ID":"b8bdbf92-61e3-41e9-a48d-4259cee80e9f","Type":"ContainerStarted","Data":"482e25d3c964b6d3e2b3936d268fb90a41f7791876fc0fe26d190a21ad959690"} Feb 23 13:00:53.092985 master-0 kubenswrapper[4202]: I0223 13:00:53.092937 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" event={"ID":"f348bffa-b2f6-4695-88a7-923625e7fb02","Type":"ContainerStarted","Data":"86bfbedca58264a38e839a587d5e58f4d6fbf5d20a12071a7c803f4a3f76ad13"} Feb 23 13:00:53.104017 master-0 kubenswrapper[4202]: I0223 13:00:53.103926 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"] Feb 23 13:00:53.104992 master-0 kubenswrapper[4202]: W0223 13:00:53.104854 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b9d6485_cf67_49c5_99c1_b8582a0bab70.slice/crio-9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb WatchSource:0}: Error finding container 
9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb: Status 404 returned error can't find the container with id 9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb
Feb 23 13:00:53.106403 master-0 kubenswrapper[4202]: I0223 13:00:53.106299 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"]
Feb 23 13:00:53.108256 master-0 kubenswrapper[4202]: I0223 13:00:53.107435 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"]
Feb 23 13:00:53.108785 master-0 kubenswrapper[4202]: W0223 13:00:53.108724 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71885db_c29e_429a_aa1f_1c274796a69f.slice/crio-ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7 WatchSource:0}: Error finding container ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7: Status 404 returned error can't find the container with id ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7
Feb 23 13:00:53.114675 master-0 kubenswrapper[4202]: W0223 13:00:53.114633 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba96760d_c6aa_4d7d_be5d_9a7e7cb549c9.slice/crio-75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2 WatchSource:0}: Error finding container 75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2: Status 404 returned error can't find the container with id 75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2
Feb 23 13:00:53.115404 master-0 kubenswrapper[4202]: W0223 13:00:53.115361 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29126ab2_a689_4b0e_a1f4_4faed19b0fbc.slice/crio-54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8 WatchSource:0}: Error finding container 54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8: Status 404 returned error can't find the container with id 54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8
Feb 23 13:00:53.177039 master-0 kubenswrapper[4202]: I0223 13:00:53.176970 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"]
Feb 23 13:00:53.178061 master-0 kubenswrapper[4202]: I0223 13:00:53.178024 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"]
Feb 23 13:00:53.179178 master-0 kubenswrapper[4202]: I0223 13:00:53.179123 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"]
Feb 23 13:00:53.184425 master-0 kubenswrapper[4202]: W0223 13:00:53.184367 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eab6dca_0e70_49d3_9e4b_e5dba46c0a1a.slice/crio-d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923 WatchSource:0}: Error finding container d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923: Status 404 returned error can't find the container with id d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923
Feb 23 13:00:53.187369 master-0 kubenswrapper[4202]: I0223 13:00:53.187291 4202 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"]
Feb 23 13:00:53.188429 master-0 kubenswrapper[4202]: W0223 13:00:53.188376 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3daf0176_92e7_4642_8643_4afbefb77235.slice/crio-4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a WatchSource:0}: Error finding container 4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a: Status 404 returned error can't find the container with id 4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a
Feb 23 13:00:53.197468 master-0 kubenswrapper[4202]: W0223 13:00:53.197415 4202 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b02d3c_f671_4850_8c6e_315044a1376c.slice/crio-ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b WatchSource:0}: Error finding container ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b: Status 404 returned error can't find the container with id ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b
Feb 23 13:00:53.839371 master-0 kubenswrapper[4202]: I0223 13:00:53.839267 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:53.839674 master-0 kubenswrapper[4202]: I0223 13:00:53.839362 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:53.839674 master-0 kubenswrapper[4202]: I0223 13:00:53.839505 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:53.839674 master-0 kubenswrapper[4202]: I0223 13:00:53.839543 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:53.839674 master-0 kubenswrapper[4202]: I0223 13:00:53.839570 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:00:53.839858 master-0 kubenswrapper[4202]: E0223 13:00:53.839669 4202 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:00:53.839858 master-0 kubenswrapper[4202]: E0223 13:00:53.839773 4202 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:00:53.839944 master-0 kubenswrapper[4202]: E0223 13:00:53.839899 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.839845824 +0000 UTC m=+114.128707652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found
Feb 23 13:00:53.839997 master-0 kubenswrapper[4202]: E0223 13:00:53.839950 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.839938387 +0000 UTC m=+114.128800255 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found
Feb 23 13:00:53.839997 master-0 kubenswrapper[4202]: E0223 13:00:53.839960 4202 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:00:53.840088 master-0 kubenswrapper[4202]: E0223 13:00:53.840005 4202 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:00:53.840088 master-0 kubenswrapper[4202]: E0223 13:00:53.840029 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.840008818 +0000 UTC m=+114.128870446 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found
Feb 23 13:00:53.840088 master-0 kubenswrapper[4202]: E0223 13:00:53.840050 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.840042059 +0000 UTC m=+114.128903687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found
Feb 23 13:00:53.840229 master-0 kubenswrapper[4202]: E0223 13:00:53.840161 4202 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:53.840266 master-0 kubenswrapper[4202]: E0223 13:00:53.840232 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.840222904 +0000 UTC m=+114.129084532 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:53.942308 master-0 kubenswrapper[4202]: I0223 13:00:53.942246 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:53.942308 master-0 kubenswrapper[4202]: I0223 13:00:53.942325 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:53.942657 master-0 kubenswrapper[4202]: E0223 13:00:53.942571 4202 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:53.942657 master-0 kubenswrapper[4202]: I0223 13:00:53.942563 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:00:53.942792 master-0 kubenswrapper[4202]: E0223 13:00:53.942723 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.942681388 +0000 UTC m=+114.231543076 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found
Feb 23 13:00:53.943136 master-0 kubenswrapper[4202]: I0223 13:00:53.943085 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:53.943207 master-0 kubenswrapper[4202]: E0223 13:00:53.943063 4202 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:53.943258 master-0 kubenswrapper[4202]: E0223 13:00:53.943233 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.943209461 +0000 UTC m=+114.232071089 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:53.943640 master-0 kubenswrapper[4202]: E0223 13:00:53.943484 4202 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:00:53.943776 master-0 kubenswrapper[4202]: E0223 13:00:53.943757 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.943714654 +0000 UTC m=+114.232576302 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found
Feb 23 13:00:53.943962 master-0 kubenswrapper[4202]: E0223 13:00:53.943929 4202 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:53.944016 master-0 kubenswrapper[4202]: E0223 13:00:53.943975 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:00:55.943965771 +0000 UTC m=+114.232827399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found
Feb 23 13:00:54.099263 master-0 kubenswrapper[4202]: I0223 13:00:54.099117 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" event={"ID":"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a","Type":"ContainerStarted","Data":"3d99d0c2bd6be47ab909ce0f360a9cd7297541119cb33654550886e7ec757dd2"}
Feb 23 13:00:54.099263 master-0 kubenswrapper[4202]: I0223 13:00:54.099187 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" event={"ID":"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a","Type":"ContainerStarted","Data":"d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923"}
Feb 23 13:00:54.100695 master-0 kubenswrapper[4202]: I0223 13:00:54.100643 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" event={"ID":"4b9d6485-cf67-49c5-99c1-b8582a0bab70","Type":"ContainerStarted","Data":"9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb"}
Feb 23 13:00:54.102503 master-0 kubenswrapper[4202]: I0223 13:00:54.102456 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" event={"ID":"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9","Type":"ContainerStarted","Data":"75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2"}
Feb 23 13:00:54.104143 master-0 kubenswrapper[4202]: I0223 13:00:54.104089 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" event={"ID":"7d0a976c-1492-4989-a5ff-e386564dd6ba","Type":"ContainerStarted","Data":"0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa"}
Feb 23 13:00:54.105580 master-0 kubenswrapper[4202]: I0223 13:00:54.105544 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" event={"ID":"d71885db-c29e-429a-aa1f-1c274796a69f","Type":"ContainerStarted","Data":"ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7"}
Feb 23 13:00:54.107116 master-0 kubenswrapper[4202]: I0223 13:00:54.106987 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" event={"ID":"d9b02d3c-f671-4850-8c6e-315044a1376c","Type":"ContainerStarted","Data":"ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b"}
Feb 23 13:00:54.109935 master-0 kubenswrapper[4202]: I0223 13:00:54.109903 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" event={"ID":"3a6b0d84-a344-43e4-b9c4-c8e0670528de","Type":"ContainerStarted","Data":"cfa9c4cdf55305be5a011d885ede624b7c0239e78eaa9736ed4cc34f79b42e2f"}
Feb 23 13:00:54.113277 master-0 kubenswrapper[4202]: I0223 13:00:54.113246 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" event={"ID":"3daf0176-92e7-4642-8643-4afbefb77235","Type":"ContainerStarted","Data":"4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a"}
Feb 23 13:00:54.114527 master-0 kubenswrapper[4202]: I0223 13:00:54.114499 4202 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" event={"ID":"29126ab2-a689-4b0e-a1f4-4faed19b0fbc","Type":"ContainerStarted","Data":"54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8"}
Feb 23 13:00:54.117718 master-0 kubenswrapper[4202]: I0223 13:00:54.117629 4202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" podStartSLOduration=70.117605787 podStartE2EDuration="1m10.117605787s" podCreationTimestamp="2026-02-23 12:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:54.116898979 +0000 UTC m=+112.405760607" watchObservedRunningTime="2026-02-23 13:00:54.117605787 +0000 UTC m=+112.406467425"
Feb 23 13:00:54.544899 master-0 kubenswrapper[4202]: I0223 13:00:54.544836 4202 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 23 13:00:55.874428 master-0 kubenswrapper[4202]: I0223 13:00:55.874284 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:55.874428 master-0 kubenswrapper[4202]: I0223 13:00:55.874380 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: I0223 13:00:55.874516 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.874527 4202 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.874635 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.874606759 +0000 UTC m=+118.163468387 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.874767 4202 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: I0223 13:00:55.874846 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.874889 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.874860975 +0000 UTC m=+118.163722613 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.874954 4202 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.874983 4202 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: I0223 13:00:55.875005 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.875028 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.875002579 +0000 UTC m=+118.163864207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.875050 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.87504234 +0000 UTC m=+118.163903968 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.875184 4202 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:00:55.876441 master-0 kubenswrapper[4202]: E0223 13:00:55.875255 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.875240665 +0000 UTC m=+118.164102303 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found
Feb 23 13:00:55.977316 master-0 kubenswrapper[4202]: I0223 13:00:55.976819 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:55.977537 master-0 kubenswrapper[4202]: E0223 13:00:55.977035 4202 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:55.977537 master-0 kubenswrapper[4202]: E0223 13:00:55.977470 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.977439933 +0000 UTC m=+118.266301561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found
Feb 23 13:00:55.977537 master-0 kubenswrapper[4202]: E0223 13:00:55.977497 4202 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:55.977660 master-0 kubenswrapper[4202]: E0223 13:00:55.977565 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.977543586 +0000 UTC m=+118.266405214 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found
Feb 23 13:00:55.977660 master-0 kubenswrapper[4202]: I0223 13:00:55.977367 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:00:55.977660 master-0 kubenswrapper[4202]: I0223 13:00:55.977604 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:55.977660 master-0 kubenswrapper[4202]: I0223 13:00:55.977641 4202 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:55.977778 master-0 kubenswrapper[4202]: E0223 13:00:55.977738 4202 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:00:55.977778 master-0 kubenswrapper[4202]: E0223 13:00:55.977760 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.977753602 +0000 UTC m=+118.266615230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found
Feb 23 13:00:55.977838 master-0 kubenswrapper[4202]: E0223 13:00:55.977798 4202 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:55.977838 master-0 kubenswrapper[4202]: E0223 13:00:55.977817 4202 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.977810803 +0000 UTC m=+118.266672431 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:57.127541 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 23 13:00:57.153604 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 23 13:00:57.153915 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 23 13:00:57.160255 master-0 systemd[1]: kubelet.service: Consumed 10.920s CPU time.
Feb 23 13:00:57.185791 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 23 13:00:57.364858 master-0 kubenswrapper[7784]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:00:57.364858 master-0 kubenswrapper[7784]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 23 13:00:57.364858 master-0 kubenswrapper[7784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:00:57.364858 master-0 kubenswrapper[7784]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:00:57.364858 master-0 kubenswrapper[7784]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 23 13:00:57.364858 master-0 kubenswrapper[7784]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:00:57.366325 master-0 kubenswrapper[7784]: I0223 13:00:57.365022 7784 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 23 13:00:57.369297 master-0 kubenswrapper[7784]: W0223 13:00:57.369271 7784 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:00:57.369297 master-0 kubenswrapper[7784]: W0223 13:00:57.369295 7784 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369322 7784 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369331 7784 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369365 7784 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369372 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369377 7784 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369383 7784 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369388 7784 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369400 7784 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369405 7784 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:00:57.369406 master-0 kubenswrapper[7784]: W0223 13:00:57.369411 7784 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369418 7784 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369448 7784 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369454 7784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369459 7784 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369464 7784 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369471 7784 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369477 7784 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369484 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369489 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369495 7784 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369500 7784 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369525 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369531 7784 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369536 7784 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369543 7784 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369549 7784 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369555 7784 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369560 7784 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 13:00:57.369654 master-0 kubenswrapper[7784]: W0223 13:00:57.369566 7784 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369571 7784 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369577 7784 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369583 7784 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369588 7784 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369593 7784 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369598 7784 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369603 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369608 7784 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369613 7784 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369618 7784 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369624 7784 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369656 7784 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369665 7784 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369672 7784 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369681 7784 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369687 7784 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369693 7784 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369698 7784 feature_gate.go:330] unrecognized feature gate: Example Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369703 7784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:00:57.370221 master-0 kubenswrapper[7784]: W0223 13:00:57.369709 7784 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369740 7784 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369747 7784 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369753 7784 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369758 7784 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369764 7784 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369770 7784 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369775 7784 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369780 7784 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369785 7784 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369791 7784 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369820 7784 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369826 7784 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369831 7784 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369838 7784 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: 
W0223 13:00:57.369843 7784 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369849 7784 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369855 7784 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369860 7784 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369865 7784 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 13:00:57.370727 master-0 kubenswrapper[7784]: W0223 13:00:57.369870 7784 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: W0223 13:00:57.369875 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370090 7784 flags.go:64] FLAG: --address="0.0.0.0" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370104 7784 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370112 7784 flags.go:64] FLAG: --anonymous-auth="true" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370120 7784 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370127 7784 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370156 7784 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370166 7784 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370173 
7784 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370179 7784 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370185 7784 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370191 7784 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370197 7784 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370202 7784 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370208 7784 flags.go:64] FLAG: --cgroup-root="" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370214 7784 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370220 7784 flags.go:64] FLAG: --client-ca-file="" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370226 7784 flags.go:64] FLAG: --cloud-config="" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370232 7784 flags.go:64] FLAG: --cloud-provider="" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370238 7784 flags.go:64] FLAG: --cluster-dns="[]" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370247 7784 flags.go:64] FLAG: --cluster-domain="" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370254 7784 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370260 7784 flags.go:64] FLAG: --config-dir="" Feb 23 13:00:57.371200 master-0 kubenswrapper[7784]: I0223 13:00:57.370265 7784 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370293 7784 flags.go:64] FLAG: --container-log-max-files="5" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370303 7784 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370310 7784 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370315 7784 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370322 7784 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370327 7784 flags.go:64] FLAG: --contention-profiling="false" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370333 7784 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370368 7784 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370375 7784 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370380 7784 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370392 7784 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370397 7784 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370403 7784 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370409 7784 flags.go:64] FLAG: --enable-load-reader="false" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370415 
7784 flags.go:64] FLAG: --enable-server="true" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370441 7784 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370450 7784 flags.go:64] FLAG: --event-burst="100" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370456 7784 flags.go:64] FLAG: --event-qps="50" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370466 7784 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370473 7784 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370478 7784 flags.go:64] FLAG: --eviction-hard="" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370486 7784 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370491 7784 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370498 7784 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 23 13:00:57.371812 master-0 kubenswrapper[7784]: I0223 13:00:57.370504 7784 flags.go:64] FLAG: --eviction-soft="" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370509 7784 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370515 7784 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370520 7784 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370526 7784 flags.go:64] FLAG: --experimental-mounter-path="" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370532 7784 flags.go:64] FLAG: 
--fail-cgroupv1="false" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370537 7784 flags.go:64] FLAG: --fail-swap-on="true" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370542 7784 flags.go:64] FLAG: --feature-gates="" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370549 7784 flags.go:64] FLAG: --file-check-frequency="20s" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370555 7784 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370561 7784 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370567 7784 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370573 7784 flags.go:64] FLAG: --healthz-port="10248" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370578 7784 flags.go:64] FLAG: --help="false" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370584 7784 flags.go:64] FLAG: --hostname-override="" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370590 7784 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370595 7784 flags.go:64] FLAG: --http-check-frequency="20s" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370601 7784 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370607 7784 flags.go:64] FLAG: --image-credential-provider-config="" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370613 7784 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370619 7784 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: 
I0223 13:00:57.370624 7784 flags.go:64] FLAG: --image-service-endpoint="" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370630 7784 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370635 7784 flags.go:64] FLAG: --kube-api-burst="100" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370641 7784 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 23 13:00:57.372480 master-0 kubenswrapper[7784]: I0223 13:00:57.370648 7784 flags.go:64] FLAG: --kube-api-qps="50" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370654 7784 flags.go:64] FLAG: --kube-reserved="" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370660 7784 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370666 7784 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370672 7784 flags.go:64] FLAG: --kubelet-cgroups="" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370679 7784 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370685 7784 flags.go:64] FLAG: --lock-file="" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370691 7784 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370696 7784 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370703 7784 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370712 7784 flags.go:64] FLAG: --log-json-split-stream="false" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370718 7784 flags.go:64] FLAG: 
--log-text-info-buffer-size="0" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370724 7784 flags.go:64] FLAG: --log-text-split-stream="false" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370730 7784 flags.go:64] FLAG: --logging-format="text" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370736 7784 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370744 7784 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370759 7784 flags.go:64] FLAG: --manifest-url="" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370766 7784 flags.go:64] FLAG: --manifest-url-header="" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370774 7784 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370780 7784 flags.go:64] FLAG: --max-open-files="1000000" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370787 7784 flags.go:64] FLAG: --max-pods="110" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370793 7784 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370800 7784 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370806 7784 flags.go:64] FLAG: --memory-manager-policy="None" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370812 7784 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 23 13:00:57.373079 master-0 kubenswrapper[7784]: I0223 13:00:57.370818 7784 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370824 7784 flags.go:64] FLAG: --node-ip="192.168.32.10" 
Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370830 7784 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370843 7784 flags.go:64] FLAG: --node-status-max-images="50" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370849 7784 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370854 7784 flags.go:64] FLAG: --oom-score-adj="-999" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370860 7784 flags.go:64] FLAG: --pod-cidr="" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370865 7784 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370874 7784 flags.go:64] FLAG: --pod-manifest-path="" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370880 7784 flags.go:64] FLAG: --pod-max-pids="-1" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370885 7784 flags.go:64] FLAG: --pods-per-core="0" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370891 7784 flags.go:64] FLAG: --port="10250" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370897 7784 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370902 7784 flags.go:64] FLAG: --provider-id="" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370908 7784 flags.go:64] FLAG: --qos-reserved="" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370914 7784 flags.go:64] FLAG: --read-only-port="10255" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 
13:00:57.370920 7784 flags.go:64] FLAG: --register-node="true" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370926 7784 flags.go:64] FLAG: --register-schedulable="true" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370932 7784 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370941 7784 flags.go:64] FLAG: --registry-burst="10" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370947 7784 flags.go:64] FLAG: --registry-qps="5" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370953 7784 flags.go:64] FLAG: --reserved-cpus="" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370958 7784 flags.go:64] FLAG: --reserved-memory="" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370965 7784 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 23 13:00:57.373938 master-0 kubenswrapper[7784]: I0223 13:00:57.370972 7784 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.370978 7784 flags.go:64] FLAG: --rotate-certificates="false" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.370984 7784 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.370991 7784 flags.go:64] FLAG: --runonce="false" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.370997 7784 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371003 7784 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371009 7784 flags.go:64] FLAG: --seccomp-default="false" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371015 7784 flags.go:64] FLAG: 
--serialize-image-pulls="true" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371021 7784 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371027 7784 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371033 7784 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371039 7784 flags.go:64] FLAG: --storage-driver-password="root" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371046 7784 flags.go:64] FLAG: --storage-driver-secure="false" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371053 7784 flags.go:64] FLAG: --storage-driver-table="stats" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371059 7784 flags.go:64] FLAG: --storage-driver-user="root" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371064 7784 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371070 7784 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371076 7784 flags.go:64] FLAG: --system-cgroups="" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371082 7784 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371091 7784 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371096 7784 flags.go:64] FLAG: --tls-cert-file="" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371102 7784 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371109 7784 flags.go:64] FLAG: --tls-min-version="" Feb 
23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371115 7784 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371121 7784 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 13:00:57.374851 master-0 kubenswrapper[7784]: I0223 13:00:57.371127 7784 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: I0223 13:00:57.371133 7784 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: I0223 13:00:57.371138 7784 flags.go:64] FLAG: --v="2"
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: I0223 13:00:57.371147 7784 flags.go:64] FLAG: --version="false"
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: I0223 13:00:57.371155 7784 flags.go:64] FLAG: --vmodule=""
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: I0223 13:00:57.371162 7784 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: I0223 13:00:57.371168 7784 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371296 7784 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371304 7784 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371310 7784 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371316 7784 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371321 7784 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371326 7784 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371331 7784 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371361 7784 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371366 7784 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371372 7784 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371379 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371385 7784 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371391 7784 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371396 7784 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:00:57.375608 master-0 kubenswrapper[7784]: W0223 13:00:57.371401 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371406 7784 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371411 7784 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371416 7784 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371421 7784 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371426 7784 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371431 7784 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371436 7784 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371441 7784 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371446 7784 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371451 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371455 7784 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371460 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371465 7784 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371470 7784 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371475 7784 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371479 7784 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371484 7784 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371490 7784 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371495 7784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:00:57.376309 master-0 kubenswrapper[7784]: W0223 13:00:57.371503 7784 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371508 7784 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371513 7784 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371518 7784 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371525 7784 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371531 7784 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371536 7784 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371542 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371546 7784 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371552 7784 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371557 7784 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371562 7784 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371566 7784 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371571 7784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371576 7784 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371581 7784 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371586 7784 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371592 7784 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371598 7784 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:00:57.377080 master-0 kubenswrapper[7784]: W0223 13:00:57.371603 7784 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371608 7784 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371613 7784 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371618 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371623 7784 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371651 7784 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371658 7784 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371663 7784 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371670 7784 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371676 7784 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371682 7784 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371688 7784 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371694 7784 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371702 7784 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371706 7784 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371734 7784 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371739 7784 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371744 7784 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:00:57.377733 master-0 kubenswrapper[7784]: W0223 13:00:57.371749 7784 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:00:57.378293 master-0 kubenswrapper[7784]: I0223 13:00:57.371757 7784 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:00:57.383437 master-0 kubenswrapper[7784]: I0223 13:00:57.383295 7784 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 23 13:00:57.383625 master-0 kubenswrapper[7784]: I0223 13:00:57.383497 7784 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 13:00:57.383702 master-0 kubenswrapper[7784]: W0223 13:00:57.383666 7784 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:00:57.383702 master-0 kubenswrapper[7784]: W0223 13:00:57.383688 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:00:57.383702 master-0 kubenswrapper[7784]: W0223 13:00:57.383694 7784 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:00:57.383702 master-0 kubenswrapper[7784]: W0223 13:00:57.383701 7784 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:00:57.383702 master-0 kubenswrapper[7784]: W0223 13:00:57.383708 7784 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383716 7784 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383725 7784 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383733 7784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383740 7784 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383744 7784 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383750 7784 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383758 7784 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383767 7784 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383773 7784 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383779 7784 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383785 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383790 7784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383794 7784 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383799 7784 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383803 7784 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383809 7784 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383814 7784 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383819 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:00:57.383848 master-0 kubenswrapper[7784]: W0223 13:00:57.383825 7784 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383830 7784 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383835 7784 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383841 7784 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383845 7784 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383851 7784 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383861 7784 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383868 7784 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383876 7784 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383884 7784 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383891 7784 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383897 7784 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383903 7784 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383908 7784 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383913 7784 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383920 7784 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383926 7784 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383932 7784 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383937 7784 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:00:57.384389 master-0 kubenswrapper[7784]: W0223 13:00:57.383942 7784 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.383950 7784 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.383956 7784 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.383963 7784 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.383993 7784 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.383999 7784 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384005 7784 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384010 7784 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384016 7784 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384021 7784 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384026 7784 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384031 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384037 7784 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384042 7784 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384047 7784 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384052 7784 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384057 7784 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384061 7784 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384066 7784 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384071 7784 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:00:57.384875 master-0 kubenswrapper[7784]: W0223 13:00:57.384076 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384081 7784 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384086 7784 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384091 7784 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384096 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384101 7784 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384106 7784 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384111 7784 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384116 7784 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384121 7784 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: I0223 13:00:57.384131 7784 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384416 7784 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384431 7784 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384436 7784 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384441 7784 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:00:57.385444 master-0 kubenswrapper[7784]: W0223 13:00:57.384447 7784 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384452 7784 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384459 7784 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384464 7784 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384469 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384474 7784 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384478 7784 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384482 7784 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384487 7784 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384494 7784 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384499 7784 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384504 7784 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384509 7784 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384514 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384519 7784 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384525 7784 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384532 7784 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384537 7784 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384541 7784 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384565 7784 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:00:57.385809 master-0 kubenswrapper[7784]: W0223 13:00:57.384570 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384574 7784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384579 7784 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384583 7784 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384588 7784 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384592 7784 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384597 7784 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384601 7784 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384605 7784 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384609 7784 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384613 7784 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384617 7784 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384622 7784 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384628 7784 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384636 7784 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384643 7784 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384649 7784 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384655 7784 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384661 7784 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:00:57.386276 master-0 kubenswrapper[7784]: W0223 13:00:57.384666 7784 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384670 7784 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384679 7784 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384683 7784 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384689 7784 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384693 7784 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384698 7784 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384703 7784 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384707 7784 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384712 7784 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384716 7784 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384721 7784 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384725 7784 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384731 7784 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384738 7784 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384743 7784 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384748 7784 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384753 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384758 7784 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384762 7784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:00:57.387175 master-0 kubenswrapper[7784]: W0223 13:00:57.384767 7784 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: W0223 13:00:57.384772 7784 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: W0223 13:00:57.384777 7784 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: W0223 13:00:57.384782 7784 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: W0223 13:00:57.384788 7784 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: W0223 13:00:57.384792 7784 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: W0223 13:00:57.384797 7784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: W0223 13:00:57.384805 7784 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: W0223 13:00:57.384810 7784 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: I0223 13:00:57.384817 7784 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:00:57.387840 master-0 kubenswrapper[7784]: I0223 13:00:57.385091 7784 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 23 13:00:57.388484 master-0 kubenswrapper[7784]: I0223 13:00:57.388430 7784 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 23 13:00:57.388739 master-0 kubenswrapper[7784]: I0223 13:00:57.388704 7784 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 23 13:00:57.389143 master-0 kubenswrapper[7784]: I0223 13:00:57.389110 7784 server.go:997] "Starting client certificate rotation" Feb 23 13:00:57.389143 master-0 kubenswrapper[7784]: I0223 13:00:57.389133 7784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 23 13:00:57.389424 master-0 kubenswrapper[7784]: I0223 13:00:57.389316 7784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 12:51:08 +0000 UTC, rotation deadline is 2026-02-24 07:54:03.646979974 +0000 UTC Feb 23 13:00:57.389424 master-0 kubenswrapper[7784]: I0223 13:00:57.389412 7784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h53m6.257570745s for next certificate rotation Feb 23 13:00:57.390419 master-0 kubenswrapper[7784]: I0223 13:00:57.390377 7784 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:00:57.392419 master-0 kubenswrapper[7784]: I0223 13:00:57.392368 7784 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:00:57.395865 master-0 kubenswrapper[7784]: I0223 13:00:57.395829 7784 log.go:25] "Validated CRI v1 runtime API" Feb 23 13:00:57.399468 master-0 kubenswrapper[7784]: I0223 13:00:57.399286 7784 log.go:25] "Validated CRI v1 image API" Feb 23 13:00:57.401151 master-0 kubenswrapper[7784]: I0223 13:00:57.401091 7784 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 23 13:00:57.405933 master-0 kubenswrapper[7784]: I0223 13:00:57.405877 7784 fs.go:135] Filesystem UUIDs: map[2d6160db-474a-49c3-9ea7-0693d391532e:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 23 13:00:57.406203 master-0 kubenswrapper[7784]: I0223 13:00:57.405912 7784 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0d03126ca1d84d609c963c370cc0003bfcc9d01813c6cef66106855300c98278/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0d03126ca1d84d609c963c370cc0003bfcc9d01813c6cef66106855300c98278/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c/userdata/shm major:0 minor:139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1baf8403b957130fe4a9ee4ed69aaae906a37f6c365a5fe5ce5b8bafc29d4a14/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1baf8403b957130fe4a9ee4ed69aaae906a37f6c365a5fe5ce5b8bafc29d4a14/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3b03398c7c5342531ea65126f53e9604327adfe194442ab3309f39be1e15bbf7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3b03398c7c5342531ea65126f53e9604327adfe194442ab3309f39be1e15bbf7/userdata/shm major:0 minor:46 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a/userdata/shm major:0 minor:280 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/482e25d3c964b6d3e2b3936d268fb90a41f7791876fc0fe26d190a21ad959690/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/482e25d3c964b6d3e2b3936d268fb90a41f7791876fc0fe26d190a21ad959690/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4df5f2d226a98cd9443f9e29da033c2146ea5a128236486d62e724363fd7a50e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4df5f2d226a98cd9443f9e29da033c2146ea5a128236486d62e724363fd7a50e/userdata/shm major:0 minor:106 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/86bfbedca58264a38e839a587d5e58f4d6fbf5d20a12071a7c803f4a3f76ad13/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86bfbedca58264a38e839a587d5e58f4d6fbf5d20a12071a7c803f4a3f76ad13/userdata/shm major:0 minor:288 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9/userdata/shm major:0 minor:163 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cfa9c4cdf55305be5a011d885ede624b7c0239e78eaa9736ed4cc34f79b42e2f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfa9c4cdf55305be5a011d885ede624b7c0239e78eaa9736ed4cc34f79b42e2f/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917/userdata/shm major:0 minor:124 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90/userdata/shm major:0 minor:154 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~projected/kube-api-access-gt4vh:{mountpoint:/var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~projected/kube-api-access-gt4vh major:0 minor:247 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~projected/kube-api-access-8j6q5:{mountpoint:/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~projected/kube-api-access-8j6q5 major:0 minor:108 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~projected/kube-api-access-9d6s7:{mountpoint:/var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~projected/kube-api-access-9d6s7 major:0 minor:161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~secret/webhook-cert major:0 minor:162 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1b0122c7-1407-4a35-afcc-2c6b1225e830/volumes/kubernetes.io~projected/kube-api-access-cw97s:{mountpoint:/var/lib/kubelet/pods/1b0122c7-1407-4a35-afcc-2c6b1225e830/volumes/kubernetes.io~projected/kube-api-access-cw97s major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~projected/kube-api-access-nwrjc:{mountpoint:/var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~projected/kube-api-access-nwrjc major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~projected/kube-api-access-p2mhb:{mountpoint:/var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~projected/kube-api-access-p2mhb major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~projected/kube-api-access major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~projected/kube-api-access major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b9d6485-cf67-49c5-99c1-b8582a0bab70/volumes/kubernetes.io~projected/kube-api-access-tgfqh:{mountpoint:/var/lib/kubelet/pods/4b9d6485-cf67-49c5-99c1-b8582a0bab70/volumes/kubernetes.io~projected/kube-api-access-tgfqh major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:246 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~secret/serving-cert major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~projected/kube-api-access-f4mkf:{mountpoint:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~projected/kube-api-access-f4mkf major:0 minor:149 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:148 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~projected/kube-api-access-wkxv7:{mountpoint:/var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~projected/kube-api-access-wkxv7 major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~secret/serving-cert major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~projected/kube-api-access-wplcg:{mountpoint:/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~projected/kube-api-access-wplcg major:0 minor:269 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~secret/serving-cert major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:286 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/kube-api-access-58xrl:{mountpoint:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/kube-api-access-58xrl major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/kube-api-access-9qsvg:{mountpoint:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/kube-api-access-9qsvg major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99f14e64-228f-4b9e-991f-ee398fe7bb8a/volumes/kubernetes.io~projected/kube-api-access-p6b4v:{mountpoint:/var/lib/kubelet/pods/99f14e64-228f-4b9e-991f-ee398fe7bb8a/volumes/kubernetes.io~projected/kube-api-access-p6b4v major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~projected/kube-api-access-pntn4:{mountpoint:/var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~projected/kube-api-access-pntn4 major:0 minor:279 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a04058be-6928-48c4-a71e-bd9e6427c097/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a04058be-6928-48c4-a71e-bd9e6427c097/volumes/kubernetes.io~projected/kube-api-access major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8bdbf92-61e3-41e9-a48d-4259cee80e9f/volumes/kubernetes.io~projected/kube-api-access-t9lvg:{mountpoint:/var/lib/kubelet/pods/b8bdbf92-61e3-41e9-a48d-4259cee80e9f/volumes/kubernetes.io~projected/kube-api-access-t9lvg major:0 minor:282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~projected/kube-api-access-zcqzj:{mountpoint:/var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~projected/kube-api-access-zcqzj major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d48d286d-4f37-4027-86cd-1580e6076613/volumes/kubernetes.io~projected/kube-api-access-fzdfs:{mountpoint:/var/lib/kubelet/pods/d48d286d-4f37-4027-86cd-1580e6076613/volumes/kubernetes.io~projected/kube-api-access-fzdfs major:0 minor:102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~projected/kube-api-access-9z9jc:{mountpoint:/var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~projected/kube-api-access-9z9jc major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~projected/kube-api-access-d7sfw:{mountpoint:/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~projected/kube-api-access-d7sfw major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~projected/kube-api-access-qfqmb:{mountpoint:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~projected/kube-api-access-qfqmb major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/etcd-client major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~projected/kube-api-access-2fsdx:{mountpoint:/var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~projected/kube-api-access-2fsdx major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~projected/kube-api-access-nnmqj:{mountpoint:/var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~projected/kube-api-access-nnmqj major:0 minor:130 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~projected/kube-api-access-5wr82:{mountpoint:/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~projected/kube-api-access-5wr82 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~secret/serving-cert major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~projected/kube-api-access-m6mk9:{mountpoint:/var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~projected/kube-api-access-m6mk9 major:0 minor:263 fsType:tmpfs blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/d65fbd6235ee7398bd9bd80325890e6d9db9042867ebd34eb1995bded03f724a/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/100044aa2d6e8e95aa40d15bafe14fe3fa98658b200942150f904d1b413dbef4/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/898d34fb874fb3a9b0718a0e075fb3f716dfa3f72b2c2e23c733e0981c25e406/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-122:{mountpoint:/var/lib/containers/storage/overlay/5411f4cad8b0aa785e9660f69a66a74abd4d7529d29a3824452ab812f6ca2cb5/merged major:0 minor:122 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/b99dbdaed4cf876b57c431924a7b08cbd5b7290889677699bba76f1e5e9e8270/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-128:{mountpoint:/var/lib/containers/storage/overlay/a9f4a5255a3259800c181eff442f6acd994a76e3aa4f615947d7459b16378b65/merged major:0 minor:128 fsType:overlay blockSize:0} 
overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/c7656afe1b20e18d5288902d38f839dd909a524e24203d2ee54de707707cb2a6/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/3a4ad25d55be29f49775856000e9b90bd9f86cd98f412abde72f1f5e4fe0699d/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-135:{mountpoint:/var/lib/containers/storage/overlay/ac424a0c3a7b645ac4cf26f4a7240202ca648fb839afda7312f214efe16fd9f8/merged major:0 minor:135 fsType:overlay blockSize:0} overlay_0-141:{mountpoint:/var/lib/containers/storage/overlay/f83964a7a0817a72dc36fa7428976bb66d625237e0dcde547b1da37e237c5852/merged major:0 minor:141 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/e831bd5837c26433c63cec3d865618cc90ab47de6c44e0873a8c1e49f6413930/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/fcbab3a1ad1870856784c1474c4be9f1cef47919d42e722e3252be750ee2245d/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/284fde59459793658254765a86793f8df3abacd1e864699d3fe7577b97f24eb6/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/53bbb5487bb7f2fd5e2d92844533bd1ba0d4c564e1408e8530368151157c7ebe/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/15bb74efab882f47c2a833c93377ed88cb22cf15d99092be7a07123d752aa043/merged major:0 minor:165 fsType:overlay blockSize:0} overlay_0-167:{mountpoint:/var/lib/containers/storage/overlay/7e7023da0419b1d3fbf63a8350d877d5a2f471b88c6761ed94d871e308e469b0/merged major:0 minor:167 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/3ce9f9af3400d5278700758e11fc9188b7d0b7fd961f6e076db9cfb05842b61f/merged major:0 minor:169 fsType:overlay blockSize:0} 
overlay_0-173:{mountpoint:/var/lib/containers/storage/overlay/725be67c61d4bff0cf7bf68310347b93041535ab704e2312d3dc0702e5bc80fa/merged major:0 minor:173 fsType:overlay blockSize:0} overlay_0-175:{mountpoint:/var/lib/containers/storage/overlay/bd4ed6a73b982df2f1cd7ac9fa2e3106f50f0f1b612a634cd8e428bf9307649f/merged major:0 minor:175 fsType:overlay blockSize:0} overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/46fbafc27cda37c1d6121fc5070b8497ca0d561c0ee3544945404e692bd2babf/merged major:0 minor:177 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/1dc296e501556ba72c419e2e967afe734057055c37bfbd667c068e397618d8f8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-181:{mountpoint:/var/lib/containers/storage/overlay/166286a56608598546dd892fdcfd6c90d941d0a1a26bf3fd23476af155393f26/merged major:0 minor:181 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/d79b53de133a90950d7696d433543d6f447bd71d307d203bb80b6ea91bb253ec/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/e7e7c74684e0a0f169663dcc21309bff70d71d0847a24a822fbc3d5424405c80/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/635c2d42f5f022497c3a3b72b39398ee3ebc58c504dc9a054610928679bd91de/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-202:{mountpoint:/var/lib/containers/storage/overlay/787019c6ce4792917114ed08730652ba0a3d59f2921aaf4905890676e51139e6/merged major:0 minor:202 fsType:overlay blockSize:0} overlay_0-203:{mountpoint:/var/lib/containers/storage/overlay/ce411febe08e408b7c2cc8f963cf4922baab54d2b4d66b51d116999fe3ad4f7e/merged major:0 minor:203 fsType:overlay blockSize:0} overlay_0-212:{mountpoint:/var/lib/containers/storage/overlay/dff4d90bb71782654696862704a0ee6d348fddb035957b9fe5aec86fd4abf6de/merged major:0 minor:212 fsType:overlay blockSize:0} 
overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/908cebb2def07eed91c0bf4a726ccdea02adc6d09b8504b1550afb9c3ef97f4d/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/4bc14fb5e5910fe47c4cb7aea0d8da81175ec081040a95db9bed3f176d4c3503/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/29a25f0cb6160f73a51fe4c4ccc44c13d78d2569e7a0da3eee462b1dbdd535ce/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/9c95e6d6a6052a306855957a860f5323e463a0ff027618961abc8a873731ad7e/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/2d7f02e85ce5200f29ce23e0e7d3e81d1477b98eca971f7d9b7215578afa6756/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/9c897047166fb93121aa760ca43ac54762bffebbad52c5e30749a3434f47a18f/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/64514c23c1c96d40494750021b1c98a164bfda23e7aa698d91257a7fb4936009/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/085e9a5ba152eedc0c0b15809f81a44ac99acc2091b58b7d884c80b145af536c/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/1bd158c627df8ffcf9db0eb116ef9dee2ea0f0f87d877cf2961cf2721dea4f9c/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/da744e864a0a961ce6a1e0fae17ebf24ed7fef629cf40f55b8e990f2e677d441/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/2281c9447135ac7d9d49cfcb6a02b61d986af3a41e676148b3325ede9027e8ad/merged major:0 minor:309 fsType:overlay blockSize:0} 
overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/6b3a831e0eb23f9df1dfdbb56e9e8fb817d1ec3f88770d10f6c96c11915c92c0/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/49b57343b2ad4021a48c3f5edd3b9687654eedb46d7a27f5f0fcda1d48500531/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/84c934eacf9a93ddcdf0b8db55fb9c9c5befd0222bc6688df6ed152073341323/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/57c829ea56051513ee51c8c20e7501148bc17c7448fb60b382fe601a98b1a6a0/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/3efad935cdfe4928d8e0d1fc59cb8a84606208015b91f3c4aec35b104bf4fa96/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/bcaa02979a0811f14317ca40f18fad2302c404c3691c751d3632d06a07192a63/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/9249d89b67dc2914847bd745f1298278aae14ed3cafac533be2004e223f8788e/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/6cac6a9d07f8f7b6dafeeb15c088716f757e3ddfb6fbb66b8b23ad8952a90e98/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/469bd2e22d27f9bdd696aafac580b16f04793d6fa13f2e6b14ba9d4dd599654c/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/045e1efb9c73ee5c4cadd4881d1cbb2a086f6828a479da626bfe7e2a5c2e3c83/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/6dd68dd7aaf9cea8ca5eae20e7dbac484c3c8c0e471322ddb439f8efb0efb8af/merged major:0 minor:62 fsType:overlay blockSize:0} 
overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/3ed70ec57cb108ef68adcfdc1765cd1ec08aa702774b1977984b9edca418ba51/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/5b9542ceb745ac11a9f0e42b7907dadcaed4e3d5d3566416f4a1972b0da5fff0/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/3a83d1f0a69c42e689a8a954c91fd8260412ab63b277830c0d8628957e1f6fd2/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/83d6d513cc674a7112ea731b65d9de3f7e44c730c12de5e9ceb2d5ed4fbc251a/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/f51040575acee4425a2e3519dcc5ecded92a9606c63d05fbb6cc0fd8d0df0923/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/db6c0e84020affcf6d0e953ff499fc41d58daa14257ec9f4f170f468c8170b10/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/c2890741562c240006624474c6138476f6f0386aafad6ffb2ef7a03fd5f039ec/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/1f52fcaa66376f5ffbc07096fbb1372698ceb82a63ccf8f309c7e3e7d0247ccf/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/54826b266dfc8c48c87e6a9e6b520724c0cd7706d283fc1ba6b99a3e37ee5d66/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/1f96e6578dfcedb5379e6a6dbc17c6a6c632529390c94ea92f26581764ba92e4/merged major:0 minor:94 fsType:overlay blockSize:0}] Feb 23 13:00:57.437442 master-0 kubenswrapper[7784]: I0223 13:00:57.436395 7784 manager.go:217] Machine: {Timestamp:2026-02-23 13:00:57.43463617 +0000 UTC m=+0.169489853 CPUVendorID:AuthenticAMD NumCores:16 
NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:813062fc9ff74d30ae5cd2159d83a791 SystemUUID:813062fc-9ff7-4d30-ae5c-d2159d83a791 BootID:4abb3f7a-5d3d-42f2-a9ae-25fe202cc7d3 Filesystems:[{Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:286 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-141 DeviceMajor:0 DeviceMinor:141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-167 DeviceMajor:0 DeviceMinor:167 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/kube-api-access-9qsvg DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90/userdata/shm DeviceMajor:0 DeviceMinor:154 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~projected/kube-api-access-wkxv7 DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~projected/kube-api-access-p2mhb DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~projected/kube-api-access-wplcg DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a04058be-6928-48c4-a71e-bd9e6427c097/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~projected/kube-api-access-nwrjc DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~projected/kube-api-access-pntn4 DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b8bdbf92-61e3-41e9-a48d-4259cee80e9f/volumes/kubernetes.io~projected/kube-api-access-t9lvg DeviceMajor:0 DeviceMinor:282 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86bfbedca58264a38e839a587d5e58f4d6fbf5d20a12071a7c803f4a3f76ad13/userdata/shm DeviceMajor:0 DeviceMinor:288 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~projected/kube-api-access-f4mkf DeviceMajor:0 DeviceMinor:149 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c/userdata/shm DeviceMajor:0 DeviceMinor:139 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4b9d6485-cf67-49c5-99c1-b8582a0bab70/volumes/kubernetes.io~projected/kube-api-access-tgfqh DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~projected/kube-api-access-m6mk9 DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~projected/kube-api-access-2fsdx DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0d03126ca1d84d609c963c370cc0003bfcc9d01813c6cef66106855300c98278/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a/userdata/shm DeviceMajor:0 DeviceMinor:280 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-181 DeviceMajor:0 DeviceMinor:181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~projected/kube-api-access-gt4vh DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~projected/kube-api-access-5wr82 DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d48d286d-4f37-4027-86cd-1580e6076613/volumes/kubernetes.io~projected/kube-api-access-fzdfs DeviceMajor:0 DeviceMinor:102 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~projected/kube-api-access-nnmqj DeviceMajor:0 
DeviceMinor:130 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9/userdata/shm DeviceMajor:0 DeviceMinor:163 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-202 DeviceMajor:0 DeviceMinor:202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:107 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4df5f2d226a98cd9443f9e29da033c2146ea5a128236486d62e724363fd7a50e/userdata/shm DeviceMajor:0 DeviceMinor:106 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-135 DeviceMajor:0 DeviceMinor:135 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~projected/kube-api-access-9z9jc DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-212 DeviceMajor:0 DeviceMinor:212 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~projected/kube-api-access-zcqzj DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3b03398c7c5342531ea65126f53e9604327adfe194442ab3309f39be1e15bbf7/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1baf8403b957130fe4a9ee4ed69aaae906a37f6c365a5fe5ce5b8bafc29d4a14/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/99f14e64-228f-4b9e-991f-ee398fe7bb8a/volumes/kubernetes.io~projected/kube-api-access-p6b4v DeviceMajor:0 DeviceMinor:118 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1b0122c7-1407-4a35-afcc-2c6b1225e830/volumes/kubernetes.io~projected/kube-api-access-cw97s DeviceMajor:0 DeviceMinor:285 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/482e25d3c964b6d3e2b3936d268fb90a41f7791876fc0fe26d190a21ad959690/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:137 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/kube-api-access-58xrl DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~projected/kube-api-access-d7sfw DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~projected/kube-api-access-9d6s7 DeviceMajor:0 DeviceMinor:161 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-175 DeviceMajor:0 DeviceMinor:175 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-128 DeviceMajor:0 DeviceMinor:128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:162 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfa9c4cdf55305be5a011d885ede624b7c0239e78eaa9736ed4cc34f79b42e2f/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~projected/kube-api-access-8j6q5 DeviceMajor:0 DeviceMinor:108 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-173 DeviceMajor:0 DeviceMinor:173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-203 DeviceMajor:0 DeviceMinor:203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-179 
DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~projected/kube-api-access-qfqmb DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917/userdata/shm DeviceMajor:0 DeviceMinor:124 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72/userdata/shm DeviceMajor:0 DeviceMinor:50 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-122 DeviceMajor:0 DeviceMinor:122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:148 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:08edfd088420ec5 MacAddress:f2:01:dc:d2:fc:e9 Speed:10000 Mtu:8900} {Name:0bb71701a766cdf MacAddress:ce:42:f4:51:d3:2c Speed:10000 Mtu:8900} {Name:4133bbba4cf25e1 MacAddress:46:06:8a:81:e8:8f Speed:10000 Mtu:8900} {Name:54d3f0365402e28 MacAddress:b2:d9:11:e1:8a:1a Speed:10000 Mtu:8900} {Name:75945fe5446b395 MacAddress:e2:9c:87:d8:b5:37 Speed:10000 Mtu:8900} {Name:86bfbedca58264a MacAddress:ce:f4:b9:e8:55:16 Speed:10000 Mtu:8900} {Name:9959e2d6fc0e606 MacAddress:da:cc:c4:60:e2:2b Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:72:92:c2:5d:57:03 Speed:0 Mtu:8900} {Name:cfa9c4cdf55305b MacAddress:f2:d0:62:3f:d7:7f Speed:10000 Mtu:8900} {Name:d3d2ae481af3820 MacAddress:be:23:3d:8f:ff:64 Speed:10000 Mtu:8900} {Name:ee9a1940c33a806 MacAddress:c6:32:92:48:4c:0f Speed:10000 Mtu:8900} {Name:ef6e18a1f50bdcd 
MacAddress:7e:01:66:05:96:e8 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:69:55:75 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:3c:f3:9f Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:de:1e:e7:99:8c:28 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} 
{Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 23 13:00:57.437442 master-0 kubenswrapper[7784]: I0223 13:00:57.437393 7784 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 23 13:00:57.438060 master-0 kubenswrapper[7784]: I0223 13:00:57.437686 7784 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 23 13:00:57.438247 master-0 kubenswrapper[7784]: I0223 13:00:57.438184 7784 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 23 13:00:57.438475 master-0 kubenswrapper[7784]: I0223 13:00:57.438402 7784 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 23 13:00:57.438925 master-0 kubenswrapper[7784]: I0223 13:00:57.438462 7784 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 23 13:00:57.439010 master-0 kubenswrapper[7784]: I0223 13:00:57.438945 7784 topology_manager.go:138] "Creating topology manager with none policy" Feb 23 13:00:57.439010 master-0 kubenswrapper[7784]: I0223 13:00:57.438973 7784 container_manager_linux.go:303] "Creating device plugin manager" Feb 23 13:00:57.439010 master-0 kubenswrapper[7784]: I0223 13:00:57.438985 7784 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 13:00:57.439010 master-0 kubenswrapper[7784]: I0223 13:00:57.439012 7784 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 13:00:57.439317 master-0 kubenswrapper[7784]: I0223 13:00:57.439283 7784 state_mem.go:36] "Initialized new in-memory state store" Feb 23 13:00:57.439483 master-0 kubenswrapper[7784]: I0223 13:00:57.439451 7784 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 23 13:00:57.439580 master-0 kubenswrapper[7784]: I0223 13:00:57.439561 7784 kubelet.go:418] "Attempting to sync node with API server" Feb 23 13:00:57.439617 master-0 kubenswrapper[7784]: I0223 13:00:57.439583 7784 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 23 13:00:57.439617 master-0 kubenswrapper[7784]: I0223 13:00:57.439607 7784 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 23 13:00:57.439706 master-0 kubenswrapper[7784]: I0223 13:00:57.439622 7784 kubelet.go:324] "Adding apiserver pod source" Feb 23 13:00:57.439706 master-0 kubenswrapper[7784]: I0223 13:00:57.439645 7784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 23 13:00:57.441276 master-0 kubenswrapper[7784]: I0223 13:00:57.441235 7784 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 23 13:00:57.441614 master-0 kubenswrapper[7784]: I0223 13:00:57.441588 7784 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 23 13:00:57.442087 master-0 kubenswrapper[7784]: I0223 13:00:57.442061 7784 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 23 13:00:57.442372 master-0 kubenswrapper[7784]: I0223 13:00:57.442325 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 13:00:57.442443 master-0 kubenswrapper[7784]: I0223 13:00:57.442395 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 13:00:57.442443 master-0 kubenswrapper[7784]: I0223 13:00:57.442408 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 13:00:57.442443 master-0 kubenswrapper[7784]: I0223 13:00:57.442415 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 13:00:57.442443 master-0 kubenswrapper[7784]: I0223 13:00:57.442424 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 13:00:57.442580 master-0 kubenswrapper[7784]: I0223 13:00:57.442433 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 13:00:57.442580 master-0 kubenswrapper[7784]: I0223 13:00:57.442463 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 13:00:57.442580 master-0 kubenswrapper[7784]: I0223 13:00:57.442472 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 23 13:00:57.442580 master-0 kubenswrapper[7784]: I0223 13:00:57.442482 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 13:00:57.442580 master-0 kubenswrapper[7784]: I0223 13:00:57.442490 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 13:00:57.442580 master-0 kubenswrapper[7784]: I0223 13:00:57.442503 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 13:00:57.442580 master-0 kubenswrapper[7784]: I0223 13:00:57.442537 7784 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Feb 23 13:00:57.442580 master-0 kubenswrapper[7784]: I0223 13:00:57.442572 7784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 23 13:00:57.443277 master-0 kubenswrapper[7784]: I0223 13:00:57.443251 7784 server.go:1280] "Started kubelet" Feb 23 13:00:57.443617 master-0 kubenswrapper[7784]: I0223 13:00:57.443579 7784 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 13:00:57.444045 master-0 kubenswrapper[7784]: I0223 13:00:57.443905 7784 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 23 13:00:57.444145 master-0 kubenswrapper[7784]: I0223 13:00:57.444105 7784 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 23 13:00:57.444800 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 23 13:00:57.445113 master-0 kubenswrapper[7784]: I0223 13:00:57.445046 7784 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 23 13:00:57.449720 master-0 kubenswrapper[7784]: I0223 13:00:57.449633 7784 server.go:449] "Adding debug handlers to kubelet server" Feb 23 13:00:57.451759 master-0 kubenswrapper[7784]: I0223 13:00:57.451711 7784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 13:00:57.451845 master-0 kubenswrapper[7784]: I0223 13:00:57.451763 7784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 13:00:57.452124 master-0 kubenswrapper[7784]: I0223 13:00:57.452079 7784 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 23 13:00:57.452124 master-0 kubenswrapper[7784]: I0223 13:00:57.452111 7784 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 23 13:00:57.452302 master-0 kubenswrapper[7784]: I0223 13:00:57.451957 7784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 12:51:08 +0000 UTC, rotation deadline is 
2026-02-24 10:06:41.809261753 +0000 UTC Feb 23 13:00:57.452452 master-0 kubenswrapper[7784]: I0223 13:00:57.452365 7784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h5m44.35697236s for next certificate rotation Feb 23 13:00:57.453020 master-0 kubenswrapper[7784]: I0223 13:00:57.452973 7784 factory.go:55] Registering systemd factory Feb 23 13:00:57.453020 master-0 kubenswrapper[7784]: I0223 13:00:57.453018 7784 factory.go:221] Registration of the systemd container factory successfully Feb 23 13:00:57.458535 master-0 kubenswrapper[7784]: I0223 13:00:57.458485 7784 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 23 13:00:57.460784 master-0 kubenswrapper[7784]: E0223 13:00:57.459552 7784 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:00:57.460784 master-0 kubenswrapper[7784]: I0223 13:00:57.460516 7784 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 13:00:57.462124 master-0 kubenswrapper[7784]: I0223 13:00:57.461234 7784 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 13:00:57.462780 master-0 kubenswrapper[7784]: I0223 13:00:57.462721 7784 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 13:00:57.462847 master-0 kubenswrapper[7784]: I0223 13:00:57.462794 7784 factory.go:153] Registering CRI-O factory Feb 23 13:00:57.462847 master-0 kubenswrapper[7784]: I0223 13:00:57.462827 7784 factory.go:221] Registration of the crio container factory successfully Feb 23 13:00:57.463007 master-0 kubenswrapper[7784]: I0223 13:00:57.462991 7784 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 23 
13:00:57.463075 master-0 kubenswrapper[7784]: I0223 13:00:57.463064 7784 factory.go:103] Registering Raw factory Feb 23 13:00:57.463119 master-0 kubenswrapper[7784]: I0223 13:00:57.463104 7784 manager.go:1196] Started watching for new ooms in manager Feb 23 13:00:57.463788 master-0 kubenswrapper[7784]: I0223 13:00:57.463701 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib" seLinuxMountContext="" Feb 23 13:00:57.463788 master-0 kubenswrapper[7784]: I0223 13:00:57.463775 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99f14e64-228f-4b9e-991f-ee398fe7bb8a" volumeName="kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v" seLinuxMountContext="" Feb 23 13:00:57.463788 master-0 kubenswrapper[7784]: I0223 13:00:57.463790 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d48d286d-4f37-4027-86cd-1580e6076613" volumeName="kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463804 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c80f4d-6b28-44f4-beef-01e705260452" volumeName="kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463819 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c80f4d-6b28-44f4-beef-01e705260452" volumeName="kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 
13:00:57.463830 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463842 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b" volumeName="kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463853 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b9d6485-cf67-49c5-99c1-b8582a0bab70" volumeName="kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463867 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a" volumeName="kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463879 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b0d84-a344-43e4-b9c4-c8e0670528de" volumeName="kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463890 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a" volumeName="kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 
13:00:57.463903 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92eaa2e2-61cd-4279-a81f-72db51308148" volumeName="kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463916 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f93af9-bdbb-4319-8ddb-e5458e8a9275" volumeName="kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463929 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d58817c-970f-47b1-a5a5-a491f3e93426" volumeName="kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463942 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" volumeName="kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463955 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="878aa813-a8b9-4a6f-8086-778df276d0d7" volumeName="kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463967 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: 
I0223 13:00:57.463979 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18386753-ec74-456d-838d-98c07c169b4b" volumeName="kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.463991 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b0d84-a344-43e4-b9c4-c8e0670528de" volumeName="kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.464002 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides" seLinuxMountContext="" Feb 23 13:00:57.463996 master-0 kubenswrapper[7784]: I0223 13:00:57.464015 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a04058be-6928-48c4-a71e-bd9e6427c097" volumeName="kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464028 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18386753-ec74-456d-838d-98c07c169b4b" volumeName="kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464041 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71885db-c29e-429a-aa1f-1c274796a69f" volumeName="kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 
13:00:57.464055 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464067 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464080 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d7c1ea0-e3c1-4494-bb27-058200b93ed7" volumeName="kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464094 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464109 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99f14e64-228f-4b9e-991f-ee398fe7bb8a" volumeName="kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464120 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d48d286d-4f37-4027-86cd-1580e6076613" volumeName="kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 
13:00:57.464131 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464143 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464202 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71885db-c29e-429a-aa1f-1c274796a69f" volumeName="kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464217 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464230 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464242 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464255 7784 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b8bdbf92-61e3-41e9-a48d-4259cee80e9f" volumeName="kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464266 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464278 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d0a976c-1492-4989-a5ff-e386564dd6ba" volumeName="kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464289 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" volumeName="kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464301 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" volumeName="kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464313 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d0a976c-1492-4989-a5ff-e386564dd6ba" volumeName="kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464325 7784 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="92eaa2e2-61cd-4279-a81f-72db51308148" volumeName="kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464341 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b0122c7-1407-4a35-afcc-2c6b1225e830" volumeName="kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464365 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18386753-ec74-456d-838d-98c07c169b4b" volumeName="kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464377 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3daf0176-92e7-4642-8643-4afbefb77235" volumeName="kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464390 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71cb2f21-6d27-411f-9c2f-d5fa286895a7" volumeName="kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464407 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71cb2f21-6d27-411f-9c2f-d5fa286895a7" volumeName="kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464420 7784 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b8bdbf92-61e3-41e9-a48d-4259cee80e9f" volumeName="kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464432 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c80f4d-6b28-44f4-beef-01e705260452" volumeName="kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464443 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d7c1ea0-e3c1-4494-bb27-058200b93ed7" volumeName="kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464455 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35e97ed9-695d-483e-8878-4f231c79f1d2" volumeName="kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464467 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92eaa2e2-61cd-4279-a81f-72db51308148" volumeName="kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464486 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" volumeName="kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464500 7784 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="35e97ed9-695d-483e-8878-4f231c79f1d2" volumeName="kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464512 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9bed6748-374e-4d8a-92a0-36d7d735d6b7" volumeName="kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464525 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71885db-c29e-429a-aa1f-1c274796a69f" volumeName="kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464537 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464548 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d58817c-970f-47b1-a5a5-a491f3e93426" volumeName="kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464560 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99f14e64-228f-4b9e-991f-ee398fe7bb8a" volumeName="kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464571 7784 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="d7c80f4d-6b28-44f4-beef-01e705260452" volumeName="kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464583 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464595 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e941c759-ab95-4b30-a571-6c132ab0e639" volumeName="kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464606 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18386753-ec74-456d-838d-98c07c169b4b" volumeName="kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464618 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3daf0176-92e7-4642-8643-4afbefb77235" volumeName="kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464632 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="878aa813-a8b9-4a6f-8086-778df276d0d7" volumeName="kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464645 7784 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="99f14e64-228f-4b9e-991f-ee398fe7bb8a" volumeName="kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap" seLinuxMountContext="" Feb 23 13:00:57.464577 master-0 kubenswrapper[7784]: I0223 13:00:57.464658 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9bed6748-374e-4d8a-92a0-36d7d735d6b7" volumeName="kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config" seLinuxMountContext="" Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464671 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" volumeName="kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj" seLinuxMountContext="" Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464685 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d48d286d-4f37-4027-86cd-1580e6076613" volumeName="kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy" seLinuxMountContext="" Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464696 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert" seLinuxMountContext="" Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464708 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" volumeName="kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets" seLinuxMountContext="" Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464720 7784 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3daf0176-92e7-4642-8643-4afbefb77235" volumeName="kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config" seLinuxMountContext=""
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464733 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a" volumeName="kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access" seLinuxMountContext=""
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464744 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71cb2f21-6d27-411f-9c2f-d5fa286895a7" volumeName="kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config" seLinuxMountContext=""
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464759 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d0a976c-1492-4989-a5ff-e386564dd6ba" volumeName="kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config" seLinuxMountContext=""
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464789 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="878aa813-a8b9-4a6f-8086-778df276d0d7" volumeName="kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl" seLinuxMountContext=""
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464801 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a04058be-6928-48c4-a71e-bd9e6427c097" volumeName="kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access" seLinuxMountContext=""
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464813 7784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b0d84-a344-43e4-b9c4-c8e0670528de" volumeName="kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config" seLinuxMountContext=""
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464824 7784 reconstruct.go:97] "Volume reconstruction finished"
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464833 7784 reconciler.go:26] "Reconciler: start to sync state"
Feb 23 13:00:57.465882 master-0 kubenswrapper[7784]: I0223 13:00:57.464862 7784 manager.go:319] Starting recovery of all containers
Feb 23 13:00:57.485189 master-0 kubenswrapper[7784]: I0223 13:00:57.484903 7784 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 23 13:00:57.507302 master-0 kubenswrapper[7784]: I0223 13:00:57.506891 7784 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 23 13:00:57.513663 master-0 kubenswrapper[7784]: I0223 13:00:57.513630 7784 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 23 13:00:57.513749 master-0 kubenswrapper[7784]: I0223 13:00:57.513683 7784 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 23 13:00:57.513749 master-0 kubenswrapper[7784]: I0223 13:00:57.513708 7784 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 23 13:00:57.513808 master-0 kubenswrapper[7784]: E0223 13:00:57.513750 7784 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 23 13:00:57.516580 master-0 kubenswrapper[7784]: I0223 13:00:57.516547 7784 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 23 13:00:57.531072 master-0 kubenswrapper[7784]: I0223 13:00:57.530954 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="18dceb7e5c040918c12a2232d059dfb40d6eebb6d7f4618c2280a12d936f7b09" exitCode=1
Feb 23 13:00:57.535132 master-0 kubenswrapper[7784]: I0223 13:00:57.535077 7784 generic.go:334] "Generic (PLEG): container finished" podID="e0063130-dfb5-4907-a000-f023a77c6441" containerID="b055012e88ad61c2c4ff44365b26ade24e930d1fe63f02496d6b67176e6fe113" exitCode=0
Feb 23 13:00:57.548459 master-0 kubenswrapper[7784]: I0223 13:00:57.548411 7784 generic.go:334] "Generic (PLEG): container finished" podID="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" containerID="3dce0cc5f97bf43d2b56ee91d574aa374ea8564835a1d8988f603b6c0033063a" exitCode=0
Feb 23 13:00:57.550437 master-0 kubenswrapper[7784]: I0223 13:00:57.550401 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/3.log"
Feb 23 13:00:57.550828 master-0 kubenswrapper[7784]: I0223 13:00:57.550783 7784 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736" exitCode=1
Feb 23 13:00:57.550828 master-0 kubenswrapper[7784]: I0223 13:00:57.550821 7784 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="d48411ed762843923134a92bcee0b4ce878e0a6398d43a3652f882b30f64b563" exitCode=0
Feb 23 13:00:57.557587 master-0 kubenswrapper[7784]: I0223 13:00:57.557547 7784 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="4239f8be57b6158b6fa0698dec86bee3b9d4f017ada846bb7d788ccf7bd49862" exitCode=0
Feb 23 13:00:57.557587 master-0 kubenswrapper[7784]: I0223 13:00:57.557576 7784 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="d00db22a72ea4aa1ec65791429e5f61e982e0efe4b37e51163034797dd496f23" exitCode=0
Feb 23 13:00:57.557587 master-0 kubenswrapper[7784]: I0223 13:00:57.557586 7784 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="4b8e974553a5805af4feb6c94d4d5c7568b29cb246442dd9b1691b86b9879742" exitCode=0
Feb 23 13:00:57.557587 master-0 kubenswrapper[7784]: I0223 13:00:57.557594 7784 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="0e189812d4682599d3015af9393b7f83c9c7e758eb4c42ea44314281d98f5ef5" exitCode=0
Feb 23 13:00:57.557734 master-0 kubenswrapper[7784]: I0223 13:00:57.557604 7784 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="de633df615dfa1f75b5da48e16feb9f5558220428b4dd98a89433d879af25256" exitCode=0
Feb 23 13:00:57.557734 master-0 kubenswrapper[7784]: I0223 13:00:57.557614 7784 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="c0cd8e6831fa2b7f0a83e05208d92bf5646225429df385d54f2e069a34fbf956" exitCode=0
Feb 23 13:00:57.559774 master-0 kubenswrapper[7784]: I0223 13:00:57.559735 7784 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="6266f5fd682a0e1614165c124ec4bfc2e4e9278c8768f489236b9ce20082b0a0" exitCode=0
Feb 23 13:00:57.573714 master-0 kubenswrapper[7784]: I0223 13:00:57.573585 7784 generic.go:334] "Generic (PLEG): container finished" podID="2cc34173-350b-40a9-a164-e500e96caf74" containerID="1ac5db7a64f2f6a417b7aa444094f3b04d08a91a07d4cc6037194f4d5f089c43" exitCode=0
Feb 23 13:00:57.614002 master-0 kubenswrapper[7784]: E0223 13:00:57.613924 7784 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 23 13:00:57.631567 master-0 kubenswrapper[7784]: I0223 13:00:57.631395 7784 manager.go:324] Recovery completed
Feb 23 13:00:57.669198 master-0 kubenswrapper[7784]: I0223 13:00:57.669118 7784 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 23 13:00:57.669198 master-0 kubenswrapper[7784]: I0223 13:00:57.669153 7784 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 23 13:00:57.669198 master-0 kubenswrapper[7784]: I0223 13:00:57.669179 7784 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 13:00:57.669643 master-0 kubenswrapper[7784]: I0223 13:00:57.669471 7784 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 23 13:00:57.669643 master-0 kubenswrapper[7784]: I0223 13:00:57.669488 7784 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 23 13:00:57.669643 master-0 kubenswrapper[7784]: I0223 13:00:57.669521 7784 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Feb 23 13:00:57.669643 master-0 kubenswrapper[7784]: I0223 13:00:57.669531 7784 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Feb 23 13:00:57.669643 master-0 kubenswrapper[7784]: I0223 13:00:57.669541 7784 policy_none.go:49] "None policy: Start"
Feb 23 13:00:57.671793 master-0 kubenswrapper[7784]: I0223 13:00:57.671759 7784 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 23 13:00:57.671793 master-0 kubenswrapper[7784]: I0223 13:00:57.671789 7784 state_mem.go:35] "Initializing new in-memory state store"
Feb 23 13:00:57.672024 master-0 kubenswrapper[7784]: I0223 13:00:57.671995 7784 state_mem.go:75] "Updated machine memory state"
Feb 23 13:00:57.672024 master-0 kubenswrapper[7784]: I0223 13:00:57.672016 7784 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Feb 23 13:00:57.693486 master-0 kubenswrapper[7784]: I0223 13:00:57.693384 7784 manager.go:334] "Starting Device Plugin manager"
Feb 23 13:00:57.693630 master-0 kubenswrapper[7784]: I0223 13:00:57.693570 7784 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 23 13:00:57.693630 master-0 kubenswrapper[7784]: I0223 13:00:57.693601 7784 server.go:79] "Starting device plugin registration server"
Feb 23 13:00:57.696267 master-0 kubenswrapper[7784]: I0223 13:00:57.696220 7784 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 23 13:00:57.696367 master-0 kubenswrapper[7784]: I0223 13:00:57.696259 7784 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 23 13:00:57.696576 master-0 kubenswrapper[7784]: I0223 13:00:57.696514 7784 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 23 13:00:57.696834 master-0 kubenswrapper[7784]: I0223 13:00:57.696756 7784 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 23 13:00:57.696834 master-0 kubenswrapper[7784]: I0223 13:00:57.696799 7784 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 23 13:00:57.797955 master-0 kubenswrapper[7784]: I0223 13:00:57.797850 7784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 13:00:57.799674 master-0 kubenswrapper[7784]: I0223 13:00:57.799604 7784 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 13:00:57.799789 master-0 kubenswrapper[7784]: I0223 13:00:57.799691 7784 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 13:00:57.799789 master-0 kubenswrapper[7784]: I0223 13:00:57.799711 7784 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 13:00:57.799933 master-0 kubenswrapper[7784]: I0223 13:00:57.799795 7784 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 13:00:57.809232 master-0 kubenswrapper[7784]: I0223 13:00:57.809162 7784 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Feb 23 13:00:57.809403 master-0 kubenswrapper[7784]: I0223 13:00:57.809257 7784 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.814721 7784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815261 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815332 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"803106da6099883ee98c3575d18f2f07b351da86541aaf47ff092d2a33469b54"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815380 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"18dceb7e5c040918c12a2232d059dfb40d6eebb6d7f4618c2280a12d936f7b09"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815401 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"3b03398c7c5342531ea65126f53e9604327adfe194442ab3309f39be1e15bbf7"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815426 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451186e3bd087f4e2a317072e4c098e400af909a2727bddeb8b4a06743ad2510"
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815443 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815457 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815470 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"1baf8403b957130fe4a9ee4ed69aaae906a37f6c365a5fe5ce5b8bafc29d4a14"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815514 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"1145acd6528f641fe4dba004ec108b22fd6a9f58b87118602acd22f6be1e6680"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815533 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815552 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"d48411ed762843923134a92bcee0b4ce878e0a6398d43a3652f882b30f64b563"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815699 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815752 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"b4fac1a45391e1b8c8d33575e403cce50d3b72e24f353f507b5f94bf171c63ab"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815772 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"a6c6c79f23b0abea958a23a6a452ad603f2442cfcf12d274565330ccbe7468f8"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815789 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"6266f5fd682a0e1614165c124ec4bfc2e4e9278c8768f489236b9ce20082b0a0"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815811 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815841 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"f49a7c31e3a171926240734ad805919af2d46930792b7ef061d645ad8ae0dac5"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815859 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"0d03126ca1d84d609c963c370cc0003bfcc9d01813c6cef66106855300c98278"}
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815880 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065bc0d684a947cce0eca298f9885e12618ab1422a1e16b4309b431cef92d591"
Feb 23 13:00:57.816494 master-0 kubenswrapper[7784]: I0223 13:00:57.815904 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f3e4b8ad89b1ad3094d7ed869a7d8b7a5de0449c5f774e500407712a8c5ce2"
Feb 23 13:00:57.827372 master-0 kubenswrapper[7784]: E0223 13:00:57.827259 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.827620 master-0 kubenswrapper[7784]: W0223 13:00:57.827589 7784 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 23 13:00:57.827866 master-0 kubenswrapper[7784]: E0223 13:00:57.827638 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:00:57.827866 master-0 kubenswrapper[7784]: E0223 13:00:57.827654 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:00:57.827866 master-0 kubenswrapper[7784]: E0223 13:00:57.827649 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:00:57.832844 master-0 kubenswrapper[7784]: E0223 13:00:57.832797 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889090 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889178 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889221 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889258 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889317 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889362 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889386 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889405 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889427 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889450 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889544 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889642 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889689 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889709 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889729 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889752 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.890155 master-0 kubenswrapper[7784]: I0223 13:00:57.889782 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.990083 master-0 kubenswrapper[7784]: I0223 13:00:57.989998 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:00:57.990083 master-0 kubenswrapper[7784]: I0223 13:00:57.990048 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:00:57.990083 master-0 kubenswrapper[7784]: I0223 13:00:57.990063 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:00:57.990083 master-0 kubenswrapper[7784]: I0223 13:00:57.990078 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.990083 master-0 kubenswrapper[7784]: I0223 13:00:57.990096 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.990083 master-0 kubenswrapper[7784]: I0223 13:00:57.990116 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990134 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990151 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990166 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990182 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990198 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990211 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990225 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990240 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990292 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990308 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990324 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990419 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990492 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990655 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:00:57.990774 master-0 kubenswrapper[7784]: I0223 13:00:57.990740 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.990809 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.990853 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.990888 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.990921 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.990966 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.991002 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.991033 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.991063 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.991096 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.991125 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID:
\"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.991156 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.991190 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:00:57.991815 master-0 kubenswrapper[7784]: I0223 13:00:57.991216 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:00:58.440405 master-0 kubenswrapper[7784]: I0223 13:00:58.440328 7784 apiserver.go:52] "Watching apiserver" Feb 23 13:00:58.459092 master-0 kubenswrapper[7784]: I0223 13:00:58.458996 7784 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 13:00:58.460517 master-0 kubenswrapper[7784]: I0223 13:00:58.460449 7784 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz","kube-system/bootstrap-kube-controller-manager-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd","openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr","openshift-network-operator/network-operator-7d7db75979-q7q5x","openshift-ovn-kubernetes/ovnkube-node-qz8dt","assisted-installer/assisted-installer-controller-nktl9","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-multus/multus-additional-cni-plugins-srlm4","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz","kube-system/bootstrap-kube-scheduler-master-0","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4","openshift-multus/multus-6lk7x","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p","openshift-dns-operator/dns-operator-8c7d49845-g8fdn","openshift-etcd/etcd-master-0-master-0","openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl","openshift-ingress-operator/ingress-operator-6569778c84-k9h69","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45","openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl","openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj","openshift-network-node-identity/network-node-identity-zr6kv","openshift-network-operator/iptables-alerter-qg27h","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz","openshift-network-diagnostics/network-check-target-rnz52
","openshift-marketplace/marketplace-operator-6f5488b997-588zk","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g","openshift-multus/network-metrics-daemon-bbrcr"] Feb 23 13:00:58.460807 master-0 kubenswrapper[7784]: I0223 13:00:58.460768 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-nktl9" Feb 23 13:00:58.461115 master-0 kubenswrapper[7784]: I0223 13:00:58.461073 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:58.461177 master-0 kubenswrapper[7784]: I0223 13:00:58.461118 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:58.461396 master-0 kubenswrapper[7784]: I0223 13:00:58.461303 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:58.461801 master-0 kubenswrapper[7784]: I0223 13:00:58.461762 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:58.462589 master-0 kubenswrapper[7784]: I0223 13:00:58.462524 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:00:58.462589 master-0 kubenswrapper[7784]: I0223 13:00:58.462565 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:00:58.462589 master-0 kubenswrapper[7784]: I0223 13:00:58.462544 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:00:58.463959 master-0 kubenswrapper[7784]: I0223 13:00:58.463892 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:00:58.466753 master-0 kubenswrapper[7784]: I0223 13:00:58.466696 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:58.467014 master-0 kubenswrapper[7784]: I0223 13:00:58.466964 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:00:58.467186 master-0 kubenswrapper[7784]: I0223 13:00:58.467142 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 23 13:00:58.467424 master-0 kubenswrapper[7784]: I0223 13:00:58.467382 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 13:00:58.467531 master-0 kubenswrapper[7784]: I0223 13:00:58.467483 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 13:00:58.467713 master-0 kubenswrapper[7784]: I0223 13:00:58.467690 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 13:00:58.468267 master-0 kubenswrapper[7784]: I0223 13:00:58.468214 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 
13:00:58.468428 master-0 kubenswrapper[7784]: I0223 13:00:58.468410 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:58.468575 master-0 kubenswrapper[7784]: I0223 13:00:58.468544 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:00:58.469215 master-0 kubenswrapper[7784]: I0223 13:00:58.469184 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 23 13:00:58.469274 master-0 kubenswrapper[7784]: I0223 13:00:58.469249 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:00:58.469274 master-0 kubenswrapper[7784]: I0223 13:00:58.469190 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 13:00:58.469503 master-0 kubenswrapper[7784]: I0223 13:00:58.469462 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 23 13:00:58.469549 master-0 kubenswrapper[7784]: I0223 13:00:58.469534 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 13:00:58.471793 master-0 kubenswrapper[7784]: I0223 13:00:58.471751 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 13:00:58.480600 master-0 kubenswrapper[7784]: I0223 13:00:58.480548 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 13:00:58.482265 master-0 kubenswrapper[7784]: I0223 13:00:58.482193 7784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-client" Feb 23 13:00:58.482265 master-0 kubenswrapper[7784]: I0223 13:00:58.482259 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 13:00:58.482470 master-0 kubenswrapper[7784]: I0223 13:00:58.482432 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.482510 master-0 kubenswrapper[7784]: I0223 13:00:58.482480 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 13:00:58.482650 master-0 kubenswrapper[7784]: I0223 13:00:58.482624 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.482679 master-0 kubenswrapper[7784]: I0223 13:00:58.482663 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 13:00:58.486400 master-0 kubenswrapper[7784]: I0223 13:00:58.486369 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:00:58.488594 master-0 kubenswrapper[7784]: I0223 13:00:58.488561 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 13:00:58.488693 master-0 kubenswrapper[7784]: I0223 13:00:58.488619 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 13:00:58.488814 master-0 kubenswrapper[7784]: I0223 13:00:58.488789 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 13:00:58.489332 master-0 kubenswrapper[7784]: I0223 13:00:58.489299 7784 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.489410 master-0 kubenswrapper[7784]: I0223 13:00:58.489383 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 13:00:58.489732 master-0 kubenswrapper[7784]: I0223 13:00:58.489698 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 13:00:58.489861 master-0 kubenswrapper[7784]: I0223 13:00:58.489778 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 13:00:58.490032 master-0 kubenswrapper[7784]: I0223 13:00:58.490002 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:00:58.490174 master-0 kubenswrapper[7784]: I0223 13:00:58.490149 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 13:00:58.490337 master-0 kubenswrapper[7784]: I0223 13:00:58.490310 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.490506 master-0 kubenswrapper[7784]: I0223 13:00:58.490481 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.490617 master-0 kubenswrapper[7784]: I0223 13:00:58.490598 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 13:00:58.490681 master-0 kubenswrapper[7784]: I0223 13:00:58.490664 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 
13:00:58.490905 master-0 kubenswrapper[7784]: I0223 13:00:58.490878 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 13:00:58.491132 master-0 kubenswrapper[7784]: I0223 13:00:58.491101 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 13:00:58.491471 master-0 kubenswrapper[7784]: I0223 13:00:58.491451 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 13:00:58.491546 master-0 kubenswrapper[7784]: I0223 13:00:58.491524 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 13:00:58.491894 master-0 kubenswrapper[7784]: I0223 13:00:58.491864 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 13:00:58.492287 master-0 kubenswrapper[7784]: I0223 13:00:58.492258 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 13:00:58.493835 master-0 kubenswrapper[7784]: I0223 13:00:58.493795 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 13:00:58.493885 master-0 kubenswrapper[7784]: I0223 13:00:58.493852 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.493971 master-0 kubenswrapper[7784]: I0223 13:00:58.493940 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 13:00:58.494139 master-0 kubenswrapper[7784]: I0223 13:00:58.494106 7784 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 13:00:58.494139 master-0 kubenswrapper[7784]: I0223 13:00:58.494125 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 23 13:00:58.494224 master-0 kubenswrapper[7784]: I0223 13:00:58.494189 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:00:58.494489 master-0 kubenswrapper[7784]: I0223 13:00:58.494456 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 13:00:58.494543 master-0 kubenswrapper[7784]: I0223 13:00:58.494464 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 13:00:58.494543 master-0 kubenswrapper[7784]: I0223 13:00:58.494536 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 13:00:58.494603 master-0 kubenswrapper[7784]: I0223 13:00:58.494528 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 23 13:00:58.495562 master-0 kubenswrapper[7784]: I0223 13:00:58.495533 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 13:00:58.496830 master-0 kubenswrapper[7784]: I0223 13:00:58.496792 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 23 13:00:58.497303 master-0 kubenswrapper[7784]: I0223 13:00:58.497273 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.497881 master-0 kubenswrapper[7784]: I0223 13:00:58.497844 7784 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.498203 master-0 kubenswrapper[7784]: I0223 13:00:58.498163 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.498464 master-0 kubenswrapper[7784]: I0223 13:00:58.498434 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 13:00:58.498619 master-0 kubenswrapper[7784]: I0223 13:00:58.498585 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 13:00:58.499083 master-0 kubenswrapper[7784]: I0223 13:00:58.499049 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 13:00:58.499547 master-0 kubenswrapper[7784]: I0223 13:00:58.499514 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 13:00:58.499887 master-0 kubenswrapper[7784]: I0223 13:00:58.499855 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 23 13:00:58.499982 master-0 kubenswrapper[7784]: I0223 13:00:58.499950 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.500263 master-0 kubenswrapper[7784]: I0223 13:00:58.500231 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 13:00:58.505890 master-0 kubenswrapper[7784]: I0223 13:00:58.505790 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:00:58.506244 master-0 kubenswrapper[7784]: I0223 13:00:58.506064 7784 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"kube-root-ca.crt" Feb 23 13:00:58.506595 master-0 kubenswrapper[7784]: I0223 13:00:58.506562 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 13:00:58.506686 master-0 kubenswrapper[7784]: I0223 13:00:58.506661 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 13:00:58.506686 master-0 kubenswrapper[7784]: I0223 13:00:58.506662 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:00:58.506868 master-0 kubenswrapper[7784]: I0223 13:00:58.506807 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 13:00:58.506941 master-0 kubenswrapper[7784]: I0223 13:00:58.506909 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 13:00:58.507221 master-0 kubenswrapper[7784]: I0223 13:00:58.507172 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 13:00:58.507221 master-0 kubenswrapper[7784]: I0223 13:00:58.507205 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 13:00:58.507416 master-0 kubenswrapper[7784]: I0223 13:00:58.507307 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 13:00:58.507525 master-0 kubenswrapper[7784]: I0223 13:00:58.507500 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 13:00:58.507918 master-0 kubenswrapper[7784]: I0223 13:00:58.507884 7784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:00:58.508136 master-0 kubenswrapper[7784]: I0223 13:00:58.508107 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 23 13:00:58.508203 master-0 kubenswrapper[7784]: I0223 13:00:58.507175 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 13:00:58.508319 master-0 kubenswrapper[7784]: I0223 13:00:58.508281 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 13:00:58.508319 master-0 kubenswrapper[7784]: I0223 13:00:58.508305 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 13:00:58.508537 master-0 kubenswrapper[7784]: I0223 13:00:58.508365 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 13:00:58.508537 master-0 kubenswrapper[7784]: I0223 13:00:58.508457 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 13:00:58.508537 master-0 kubenswrapper[7784]: I0223 13:00:58.508508 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 13:00:58.508537 master-0 kubenswrapper[7784]: I0223 13:00:58.508522 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 13:00:58.508852 master-0 kubenswrapper[7784]: I0223 13:00:58.508617 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:00:58.508852 master-0 kubenswrapper[7784]: I0223 13:00:58.508665 7784 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Feb 23 13:00:58.508852 master-0 kubenswrapper[7784]: I0223 13:00:58.508815 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 13:00:58.509013 master-0 kubenswrapper[7784]: I0223 13:00:58.508910 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 13:00:58.509147 master-0 kubenswrapper[7784]: I0223 13:00:58.509096 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 13:00:58.509248 master-0 kubenswrapper[7784]: I0223 13:00:58.509152 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 23 13:00:58.509485 master-0 kubenswrapper[7784]: I0223 13:00:58.507416 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 13:00:58.509606 master-0 kubenswrapper[7784]: I0223 13:00:58.509587 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 13:00:58.509606 master-0 kubenswrapper[7784]: I0223 13:00:58.509591 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 23 13:00:58.509868 master-0 kubenswrapper[7784]: I0223 13:00:58.509837 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 13:00:58.510699 master-0 kubenswrapper[7784]: I0223 13:00:58.510657 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 13:00:58.510985 master-0 kubenswrapper[7784]: I0223 13:00:58.510944 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" 
Feb 23 13:00:58.520465 master-0 kubenswrapper[7784]: I0223 13:00:58.520385 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 23 13:00:58.528330 master-0 kubenswrapper[7784]: I0223 13:00:58.528286 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 23 13:00:58.534808 master-0 kubenswrapper[7784]: I0223 13:00:58.534737 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 23 13:00:58.534932 master-0 kubenswrapper[7784]: I0223 13:00:58.534736 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 13:00:58.535427 master-0 kubenswrapper[7784]: I0223 13:00:58.535321 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 23 13:00:58.536384 master-0 kubenswrapper[7784]: I0223 13:00:58.536314 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 23 13:00:58.560277 master-0 kubenswrapper[7784]: I0223 13:00:58.560197 7784 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 23 13:00:58.594451 master-0 kubenswrapper[7784]: I0223 13:00:58.594366 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.594451 master-0 kubenswrapper[7784]: I0223 13:00:58.594410 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:58.594451 master-0 kubenswrapper[7784]: I0223 13:00:58.594431 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"
Feb 23 13:00:58.594451 master-0 kubenswrapper[7784]: I0223 13:00:58.594453 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 13:00:58.594451 master-0 kubenswrapper[7784]: I0223 13:00:58.594474 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594495 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594512 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594528 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594547 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594562 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594580 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594599 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmqj\" (UniqueName: \"kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594614 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wr82\" (UniqueName: \"kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594632 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594652 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9lvg\" (UniqueName: \"kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594669 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594689 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594712 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrjc\" (UniqueName: \"kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594741 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mhb\" (UniqueName: \"kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594762 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594784 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594805 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594824 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594851 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594871 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594892 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xrl\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594912 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594935 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.594961 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9jc\" (UniqueName: \"kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.595006 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.595030 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.595050 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.595069 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.595033 master-0 kubenswrapper[7784]: I0223 13:00:58.595089 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595120 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595141 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595163 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595183 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw97s\" (UniqueName: \"kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595207 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595233 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595250 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595268 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595287 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595323 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgfqh\" (UniqueName: \"kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh\") pod \"csi-snapshot-controller-operator-6fb4df594f-f5n2p\" (UID: \"4b9d6485-cf67-49c5-99c1-b8582a0bab70\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595358 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595381 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595403 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595422 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595441 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595460 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595478 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595497 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595517 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595535 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595554 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.595811 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.596996 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.597110 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.597196 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.597562 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.598084 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.598445 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.598506 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.598507 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.598791 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.599876 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.600041 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.600451 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.600625 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.600674 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.600926 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.600982 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:00:58.601021 master-0 kubenswrapper[7784]: I0223 13:00:58.601107 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601166 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6q5\" (UniqueName: \"kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601183 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601203 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601224 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601280 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601249 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601332 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601492 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601548 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601548 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601591 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601664 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601741 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601783 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.601893 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602004 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602140 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602138 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602205 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602314 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv"
Feb 23 13:00:58.606633 master-0
kubenswrapper[7784]: I0223 13:00:58.602425 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602596 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602741 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602813 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.602954 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert\") pod 
\"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.603217 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6b4v\" (UniqueName: \"kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.603226 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.603375 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.604183 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 
13:00:58.604239 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.604452 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.604546 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.604749 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzdfs\" (UniqueName: \"kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.604958 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.605155 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.605310 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.605427 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.605539 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqzj\" (UniqueName: \"kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.605694 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.605708 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.605989 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.606077 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fsdx\" (UniqueName: \"kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.606232 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: 
I0223 13:00:58.606322 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.606396 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.606461 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.606478 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.606700 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca\") pod 
\"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:58.606633 master-0 kubenswrapper[7784]: I0223 13:00:58.606659 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.606936 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.607057 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.607063 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7sfw\" (UniqueName: \"kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 
13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.607960 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.607994 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608018 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608041 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt4vh\" (UniqueName: \"kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608295 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608334 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608377 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wplcg\" (UniqueName: \"kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608408 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608433 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608489 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608567 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608602 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608635 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6mk9\" (UniqueName: \"kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608755 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608766 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608788 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608938 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.608963 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 
13:00:58.609001 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609022 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609023 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609082 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609111 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfqmb\" (UniqueName: 
\"kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609139 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609161 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609185 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxv7\" (UniqueName: \"kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609207 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6s7\" (UniqueName: \"kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 
23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609231 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4mkf\" (UniqueName: \"kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609233 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609390 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609454 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pntn4\" (UniqueName: \"kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609494 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609532 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609573 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qsvg\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609704 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609762 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609808 7784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609850 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609887 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:58.609835 master-0 kubenswrapper[7784]: I0223 13:00:58.609924 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.609959 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.609997 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.610027 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.610053 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.610064 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.610081 
7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.610128 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.610187 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.610269 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:00:58.612607 master-0 kubenswrapper[7784]: I0223 13:00:58.610399 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.622702 master-0 kubenswrapper[7784]: E0223 13:00:58.619291 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:00:58.622702 master-0 kubenswrapper[7784]: E0223 13:00:58.620001 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:00:58.622702 master-0 kubenswrapper[7784]: W0223 13:00:58.620639 7784 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 23 13:00:58.622702 master-0 kubenswrapper[7784]: E0223 13:00:58.620702 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:00:58.622702 master-0 kubenswrapper[7784]: E0223 13:00:58.620966 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:00:58.622702 master-0 kubenswrapper[7784]: E0223 
13:00:58.621328 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:00:58.633714 master-0 kubenswrapper[7784]: I0223 13:00:58.633638 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wr82\" (UniqueName: \"kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:00:58.640483 master-0 kubenswrapper[7784]: I0223 13:00:58.640409 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xrl\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:58.711000 master-0 kubenswrapper[7784]: I0223 13:00:58.710835 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711000 master-0 kubenswrapper[7784]: I0223 13:00:58.710907 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711000 master-0 kubenswrapper[7784]: I0223 13:00:58.710941 7784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.711000 master-0 kubenswrapper[7784]: I0223 13:00:58.710977 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711000 master-0 kubenswrapper[7784]: I0223 13:00:58.711007 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711048 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711094 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 
13:00:58.711123 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711151 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711170 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711211 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711238 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " 
pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711258 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711276 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711311 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711373 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711407 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" 
(UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711433 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711462 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711489 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711509 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711527 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:00:58.711554 master-0 kubenswrapper[7784]: I0223 13:00:58.711547 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.711612 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.711843 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.711933 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 
13:00:58.711963 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.711999 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712043 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712069 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712075 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712106 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: E0223 13:00:58.712230 7784 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712256 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712311 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: E0223 13:00:58.712325 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.212303134 +0000 UTC m=+1.947156777 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712396 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712444 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712491 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712489 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712508 7784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712530 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712565 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712599 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712609 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712620 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712659 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712553 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712627 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712703 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: E0223 13:00:58.712657 7784 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 
Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712766 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: E0223 13:00:58.712784 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: E0223 13:00:58.712734 7784 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 23 13:00:58.712780 master-0 kubenswrapper[7784]: I0223 13:00:58.712844 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.712850 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.712844 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.212804616 +0000 UTC m=+1.947658299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.712916 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.712932 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.212899509 +0000 UTC m=+1.947753172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.712953 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.21294316 +0000 UTC m=+1.947796813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : secret "metrics-daemon-secret" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.712957 7784 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713003 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713038 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713051 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713021 7784 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713032 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713011 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713114 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713091 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.213079423 +0000 UTC m=+1.947933066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713182 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.213162905 +0000 UTC m=+1.948016578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713223 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713254 7784 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713268 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713313 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.213292698 +0000 UTC m=+1.948146341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713332 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713381 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713401 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713413 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713422 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.213410351 +0000 UTC m=+1.948264204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713429 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713448 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713462 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.213444462 +0000 UTC m=+1.948298145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713463 7784 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713493 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713517 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713535 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713494 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.713571 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.213540474 +0000 UTC m=+1.948394157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713618 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713658 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713669 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713697 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713723 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713675 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713762 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713809 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
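Each failed MountVolume operation above is rescheduled with exponential backoff: the first round of failures is retried after 500ms, the next after 1s (visible in the `durationBeforeRetry` values later in the log). A minimal sketch of that doubling schedule, assuming the kubelet's usual 500ms initial delay and 2x factor; the 122s cap matches the `maxDurationBeforeRetry` seen in recent kubelet sources, but verify against your release:

```python
def backoff_schedule(initial=0.5, factor=2.0, cap=122.0, steps=6):
    """Yield successive retry delays in seconds: 0.5, 1.0, 2.0, ... capped at `cap`."""
    delay = initial
    for _ in range(steps):
        yield delay
        delay = min(delay * factor, cap)

print(list(backoff_schedule()))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```

This is why the same "secret not found" error recurs at widening intervals rather than flooding the log at a constant rate.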
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713828 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713865 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713885 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713895 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713952 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713949 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.713998 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.714068 7784 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.714080 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: E0223 13:00:58.714118 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:59.214107809 +0000 UTC m=+1.948961452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.714115 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x"
Feb 23 13:00:58.714997 master-0 kubenswrapper[7784]: I0223 13:00:58.714003 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:00:58.770288 master-0 kubenswrapper[7784]: I0223 13:00:58.769732 7784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:00:59.221232 master-0 kubenswrapper[7784]: I0223 13:00:59.221114 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:00:59.221571 master-0 kubenswrapper[7784]: E0223 13:00:59.221397 7784 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:00:59.221571 master-0 kubenswrapper[7784]: I0223 13:00:59.221419 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:00:59.221571 master-0 kubenswrapper[7784]: E0223 13:00:59.221521 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.22149361 +0000 UTC m=+2.956347253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found
Feb 23 13:00:59.221719 master-0 kubenswrapper[7784]: I0223 13:00:59.221564 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 13:00:59.221719 master-0 kubenswrapper[7784]: I0223 13:00:59.221613 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr"
Feb 23 13:00:59.221719 master-0 kubenswrapper[7784]: E0223 13:00:59.221619 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:00:59.221867 master-0 kubenswrapper[7784]: E0223 13:00:59.221729 7784 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 23 13:00:59.221867 master-0 kubenswrapper[7784]: E0223 13:00:59.221757 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.221733406 +0000 UTC m=+2.956587049 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found
Feb 23 13:00:59.221867 master-0 kubenswrapper[7784]: E0223 13:00:59.221675 7784 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 23 13:00:59.221867 master-0 kubenswrapper[7784]: E0223 13:00:59.221780 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.221770817 +0000 UTC m=+2.956624460 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : secret "metrics-daemon-secret" not found
Feb 23 13:00:59.221867 master-0 kubenswrapper[7784]: E0223 13:00:59.221797 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.221787757 +0000 UTC m=+2.956641400 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found
Feb 23 13:00:59.221867 master-0 kubenswrapper[7784]: I0223 13:00:59.221824 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:00:59.221867 master-0 kubenswrapper[7784]: I0223 13:00:59.221848 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:59.221867 master-0 kubenswrapper[7784]: I0223 13:00:59.221875 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:00:59.222209 master-0 kubenswrapper[7784]: I0223 13:00:59.221956 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:00:59.222209 master-0 kubenswrapper[7784]: I0223 13:00:59.221977 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:00:59.222209 master-0 kubenswrapper[7784]: I0223 13:00:59.222014 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:00:59.222209 master-0 kubenswrapper[7784]: I0223 13:00:59.222064 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:00:59.222209 master-0 kubenswrapper[7784]: E0223 13:00:59.222133 7784 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:59.222209 master-0 kubenswrapper[7784]: E0223 13:00:59.222156 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.222149277 +0000 UTC m=+2.957002920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:00:59.222209 master-0 kubenswrapper[7784]: E0223 13:00:59.222198 7784 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:59.222209 master-0 kubenswrapper[7784]: E0223 13:00:59.222218 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.222212348 +0000 UTC m=+2.957065991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222261 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222283 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.22227596 +0000 UTC m=+2.957129603 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222325 7784 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222364 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.222357892 +0000 UTC m=+2.957211535 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222405 7784 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222424 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.222418473 +0000 UTC m=+2.957272116 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222461 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222481 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.222474885 +0000 UTC m=+2.957328528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222522 7784 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:00:59.222554 master-0 kubenswrapper[7784]: E0223 13:00:59.222542 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.222537086 +0000 UTC m=+2.957390729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found
Feb 23 13:00:59.322215 master-0 kubenswrapper[7784]: I0223 13:00:59.322142 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9lvg\" (UniqueName: \"kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:00:59.448692 master-0 kubenswrapper[7784]: I0223 13:00:59.445384 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fsdx\" (UniqueName: \"kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:00:59.448692 master-0 kubenswrapper[7784]: I0223 13:00:59.445848 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw97s\" (UniqueName: \"kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:00:59.448692 master-0 kubenswrapper[7784]: I0223 13:00:59.446281 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mhb\" (UniqueName: \"kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:00:59.448692 master-0 kubenswrapper[7784]: I0223 13:00:59.446459 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:00:59.448692 master-0 kubenswrapper[7784]: I0223 13:00:59.446516 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9jc\" (UniqueName: \"kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:00:59.448692 master-0 kubenswrapper[7784]: I0223 13:00:59.446925 7784 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8j6q5\" (UniqueName: \"kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:00:59.449592 master-0 kubenswrapper[7784]: I0223 13:00:59.449513 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6b4v\" (UniqueName: \"kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:00:59.450318 master-0 kubenswrapper[7784]: I0223 13:00:59.450256 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:00:59.453542 master-0 kubenswrapper[7784]: I0223 13:00:59.452502 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfqh\" (UniqueName: \"kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh\") pod \"csi-snapshot-controller-operator-6fb4df594f-f5n2p\" (UID: \"4b9d6485-cf67-49c5-99c1-b8582a0bab70\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" Feb 23 13:00:59.453943 master-0 kubenswrapper[7784]: I0223 13:00:59.453909 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrjc\" (UniqueName: \"kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: 
\"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:00:59.461440 master-0 kubenswrapper[7784]: I0223 13:00:59.457784 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzdfs\" (UniqueName: \"kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:00:59.461440 master-0 kubenswrapper[7784]: I0223 13:00:59.458917 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:00:59.461440 master-0 kubenswrapper[7784]: I0223 13:00:59.459185 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:00:59.461440 master-0 kubenswrapper[7784]: I0223 13:00:59.459943 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnmqj\" (UniqueName: \"kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:00:59.679996 master-0 kubenswrapper[7784]: I0223 13:00:59.671663 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m6mk9\" (UniqueName: \"kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:00:59.679996 master-0 kubenswrapper[7784]: I0223 13:00:59.672272 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfqmb\" (UniqueName: \"kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:00:59.679996 master-0 kubenswrapper[7784]: I0223 13:00:59.677944 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7sfw\" (UniqueName: \"kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:00:59.679996 master-0 kubenswrapper[7784]: I0223 13:00:59.678776 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:59.704032 master-0 kubenswrapper[7784]: I0223 13:00:59.682683 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6s7\" (UniqueName: \"kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " 
pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:00:59.704032 master-0 kubenswrapper[7784]: I0223 13:00:59.684223 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qsvg\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:00:59.704032 master-0 kubenswrapper[7784]: I0223 13:00:59.684227 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt4vh\" (UniqueName: \"kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:00:59.704032 master-0 kubenswrapper[7784]: I0223 13:00:59.702638 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqzj\" (UniqueName: \"kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" Feb 23 13:00:59.704032 master-0 kubenswrapper[7784]: I0223 13:00:59.703265 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxv7\" (UniqueName: \"kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:00:59.704032 master-0 kubenswrapper[7784]: I0223 
13:00:59.703982 7784 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 13:00:59.714418 master-0 kubenswrapper[7784]: I0223 13:00:59.704653 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:00:59.714418 master-0 kubenswrapper[7784]: I0223 13:00:59.705119 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4mkf\" (UniqueName: \"kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:00:59.714418 master-0 kubenswrapper[7784]: I0223 13:00:59.707187 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wplcg\" (UniqueName: \"kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:00:59.715006 master-0 kubenswrapper[7784]: I0223 13:00:59.714752 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pntn4\" (UniqueName: \"kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:00:59.741414 master-0 
kubenswrapper[7784]: I0223 13:00:59.739573 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:00:59.968824 master-0 kubenswrapper[7784]: I0223 13:00:59.968625 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:01:00.258356 master-0 kubenswrapper[7784]: I0223 13:01:00.257889 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:01:00.258356 master-0 kubenswrapper[7784]: I0223 13:01:00.257947 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:00.258356 master-0 kubenswrapper[7784]: I0223 13:01:00.257975 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 
13:01:00.258356 master-0 kubenswrapper[7784]: E0223 13:01:00.258125 7784 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:01:00.258356 master-0 kubenswrapper[7784]: E0223 13:01:00.258198 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.258179441 +0000 UTC m=+4.993033084 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.258750 7784 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.258781 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.258772036 +0000 UTC m=+4.993625679 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.258830 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.258849 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.258842757 +0000 UTC m=+4.993696400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: I0223 13:01:00.258872 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: I0223 13:01:00.258888 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: I0223 13:01:00.258907 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: I0223 13:01:00.258931 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: I0223 13:01:00.258955 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: I0223 13:01:00.258975 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: 
I0223 13:01:00.258992 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: I0223 13:01:00.259007 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259062 7784 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259082 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.259076553 +0000 UTC m=+4.993930196 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259113 7784 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259130 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.259124154 +0000 UTC m=+4.993977797 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259157 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259175 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.259168515 +0000 UTC m=+4.994022158 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259206 7784 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259221 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.259216406 +0000 UTC m=+4.994070049 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259307 7784 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259363 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.259321599 +0000 UTC m=+4.994175242 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259404 7784 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 23 13:01:00.259465 master-0 kubenswrapper[7784]: E0223 13:01:00.259425 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.259419052 +0000 UTC m=+4.994272695 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : secret "metrics-daemon-secret" not found Feb 23 13:01:00.260215 master-0 kubenswrapper[7784]: E0223 13:01:00.259631 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:00.260215 master-0 kubenswrapper[7784]: E0223 13:01:00.259784 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.25974214 +0000 UTC m=+4.994595823 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:00.260215 master-0 kubenswrapper[7784]: E0223 13:01:00.259912 7784 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:00.260215 master-0 kubenswrapper[7784]: E0223 13:01:00.259964 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.259948535 +0000 UTC m=+4.994802218 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found Feb 23 13:01:00.477020 master-0 kubenswrapper[7784]: I0223 13:01:00.476900 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:00.970473 master-0 kubenswrapper[7784]: E0223 13:01:00.970036 7784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" Feb 23 13:01:00.970473 master-0 kubenswrapper[7784]: E0223 13:01:00.970377 7784 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9lvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-qg27h_openshift-network-operator(b8bdbf92-61e3-41e9-a48d-4259cee80e9f): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:01:00.971727 master-0 kubenswrapper[7784]: E0223 13:01:00.971656 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-qg27h" podUID="b8bdbf92-61e3-41e9-a48d-4259cee80e9f" Feb 23 13:01:01.019307 master-0 kubenswrapper[7784]: I0223 13:01:01.019243 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:01.326243 master-0 kubenswrapper[7784]: I0223 13:01:01.326192 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:01.540650 master-0 kubenswrapper[7784]: E0223 13:01:01.540579 7784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" Feb 23 13:01:01.541136 master-0 kubenswrapper[7784]: E0223 13:01:01.540857 7784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 
-3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcqzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-c48c8bf7c-mvkrz_openshift-service-ca-operator(ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:01:01.542279 master-0 kubenswrapper[7784]: E0223 13:01:01.542228 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" podUID="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" Feb 23 13:01:01.620086 master-0 kubenswrapper[7784]: I0223 13:01:01.619914 7784 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=7.619893508 podStartE2EDuration="7.619893508s" podCreationTimestamp="2026-02-23 13:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:01.619030667 +0000 UTC m=+4.353884330" watchObservedRunningTime="2026-02-23 13:01:01.619893508 +0000 UTC m=+4.354747151" Feb 23 13:01:02.000505 master-0 kubenswrapper[7784]: I0223 13:01:02.000291 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:01:02.032603 master-0 kubenswrapper[7784]: I0223 13:01:02.032554 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:01:02.091161 master-0 kubenswrapper[7784]: I0223 13:01:02.091111 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:01:02.125922 master-0 kubenswrapper[7784]: I0223 13:01:02.125849 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:01:02.271818 master-0 kubenswrapper[7784]: E0223 13:01:02.271726 7784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" Feb 23 13:01:02.272124 master-0 kubenswrapper[7784]: E0223 13:01:02.272060 7784 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:01:02.272124 master-0 kubenswrapper[7784]: container &Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e,Command:[/bin/bash -ec],Args:[if [ -s 
/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then Feb 23 13:01:02.272124 master-0 kubenswrapper[7784]: echo "Copying system trust bundle" Feb 23 13:01:02.272124 master-0 kubenswrapper[7784]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem Feb 23 13:01:02.272124 master-0 kubenswrapper[7784]: fi Feb 23 13:01:02.272124 master-0 kubenswrapper[7784]: exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate Feb 23 13:01:02.272124 master-0 kubenswrapper[7784]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.33_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wr82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-5bd7c86784-rlbcj_openshift-authentication-operator(f348bffa-b2f6-4695-88a7-923625e7fb02): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled Feb 23 13:01:02.272124 master-0 kubenswrapper[7784]: > logger="UnhandledError" Feb 23 13:01:02.273841 master-0 kubenswrapper[7784]: E0223 13:01:02.273742 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" podUID="f348bffa-b2f6-4695-88a7-923625e7fb02" Feb 23 13:01:02.283434 master-0 kubenswrapper[7784]: I0223 13:01:02.283365 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:01:02.283532 master-0 kubenswrapper[7784]: I0223 13:01:02.283444 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:02.283532 master-0 kubenswrapper[7784]: I0223 13:01:02.283479 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:01:02.283726 master-0 kubenswrapper[7784]: E0223 13:01:02.283658 7784 secret.go:189] Couldn't get 
secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:01:02.283779 master-0 kubenswrapper[7784]: E0223 13:01:02.283752 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:02.283779 master-0 kubenswrapper[7784]: I0223 13:01:02.283696 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:02.283841 master-0 kubenswrapper[7784]: E0223 13:01:02.283775 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.283750546 +0000 UTC m=+9.018604189 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found Feb 23 13:01:02.283907 master-0 kubenswrapper[7784]: E0223 13:01:02.283866 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.283848028 +0000 UTC m=+9.018701671 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:02.283945 master-0 kubenswrapper[7784]: E0223 13:01:02.283665 7784 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:01:02.283974 master-0 kubenswrapper[7784]: I0223 13:01:02.283937 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:01:02.284002 master-0 kubenswrapper[7784]: I0223 13:01:02.283983 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:01:02.284047 master-0 kubenswrapper[7784]: E0223 13:01:02.284025 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.283996463 +0000 UTC m=+9.018850106 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found Feb 23 13:01:02.284095 master-0 kubenswrapper[7784]: E0223 13:01:02.283785 7784 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:02.284095 master-0 kubenswrapper[7784]: I0223 13:01:02.284059 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:01:02.284095 master-0 kubenswrapper[7784]: E0223 13:01:02.284076 7784 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:02.284176 master-0 kubenswrapper[7784]: E0223 13:01:02.284115 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.284088315 +0000 UTC m=+9.018942168 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found Feb 23 13:01:02.284176 master-0 kubenswrapper[7784]: E0223 13:01:02.284140 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.284130246 +0000 UTC m=+9.018984119 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found Feb 23 13:01:02.284176 master-0 kubenswrapper[7784]: E0223 13:01:02.284157 7784 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:02.284176 master-0 kubenswrapper[7784]: I0223 13:01:02.284170 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: E0223 13:01:02.284192 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.284179577 +0000 UTC m=+9.019033220 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: E0223 13:01:02.284197 7784 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: I0223 13:01:02.284215 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: E0223 13:01:02.284246 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.284231978 +0000 UTC m=+9.019085821 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: I0223 13:01:02.284274 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: E0223 13:01:02.284307 7784 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: E0223 13:01:02.284348 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.284329481 +0000 UTC m=+9.019183124 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: I0223 13:01:02.284306 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:01:02.284335 master-0 kubenswrapper[7784]: E0223 13:01:02.284367 7784 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:02.284740 master-0 kubenswrapper[7784]: E0223 13:01:02.284383 7784 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 23 13:01:02.284740 master-0 kubenswrapper[7784]: E0223 13:01:02.284392 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.284384972 +0000 UTC m=+9.019238615 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:02.284740 master-0 kubenswrapper[7784]: E0223 13:01:02.284398 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:02.284740 master-0 kubenswrapper[7784]: E0223 13:01:02.284429 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.284413073 +0000 UTC m=+9.019266936 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : secret "metrics-daemon-secret" not found Feb 23 13:01:02.284740 master-0 kubenswrapper[7784]: E0223 13:01:02.284453 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:06.284442933 +0000 UTC m=+9.019296796 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:02.597218 master-0 kubenswrapper[7784]: I0223 13:01:02.597049 7784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:02.631307 master-0 kubenswrapper[7784]: I0223 13:01:02.631258 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:02.813998 master-0 kubenswrapper[7784]: E0223 13:01:02.813925 7784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" Feb 23 13:01:02.814291 master-0 kubenswrapper[7784]: E0223 13:01:02.814205 7784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896,Command:[cluster-openshift-controller-manager-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9z9jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-584cc7bcb5-dpxl4_openshift-controller-manager-operator(d71885db-c29e-429a-aa1f-1c274796a69f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:01:02.815566 master-0 kubenswrapper[7784]: E0223 13:01:02.815492 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" podUID="d71885db-c29e-429a-aa1f-1c274796a69f" Feb 23 13:01:02.899047 master-0 kubenswrapper[7784]: I0223 13:01:02.898861 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:02.901678 master-0 kubenswrapper[7784]: I0223 13:01:02.901619 7784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:02.903873 master-0 kubenswrapper[7784]: I0223 13:01:02.903817 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:03.428601 master-0 kubenswrapper[7784]: E0223 13:01:03.428537 7784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" Feb 23 13:01:03.428769 master-0 kubenswrapper[7784]: E0223 13:01:03.428712 7784 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wplcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-8586dccc9b-zh69g_openshift-apiserver-operator(7d0a976c-1492-4989-a5ff-e386564dd6ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:01:03.430304 master-0 kubenswrapper[7784]: E0223 13:01:03.430245 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" podUID="7d0a976c-1492-4989-a5ff-e386564dd6ba" Feb 23 13:01:03.600470 master-0 kubenswrapper[7784]: I0223 13:01:03.600390 7784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 
13:01:03.959776 master-0 kubenswrapper[7784]: E0223 13:01:03.959716 7784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" Feb 23 13:01:03.959943 master-0 kubenswrapper[7784]: E0223 13:01:03.959893 7784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-77cd4d9559-cz4nt_openshift-kube-scheduler-operator(3daf0176-92e7-4642-8643-4afbefb77235): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:01:03.961213 master-0 kubenswrapper[7784]: E0223 13:01:03.961150 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" podUID="3daf0176-92e7-4642-8643-4afbefb77235" Feb 23 13:01:04.161536 master-0 kubenswrapper[7784]: I0223 13:01:04.149039 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-network-diagnostics/network-check-target-rnz52"] Feb 23 13:01:04.198140 master-0 kubenswrapper[7784]: W0223 13:01:04.197675 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf81886b9_fcd3_4666_b550_0688072210f7.slice/crio-683ce0dfc8d9412d9d124855df897567cd87cd72bf0e18113725c86bfc97ad40 WatchSource:0}: Error finding container 683ce0dfc8d9412d9d124855df897567cd87cd72bf0e18113725c86bfc97ad40: Status 404 returned error can't find the container with id 683ce0dfc8d9412d9d124855df897567cd87cd72bf0e18113725c86bfc97ad40 Feb 23 13:01:04.615325 master-0 kubenswrapper[7784]: I0223 13:01:04.615249 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" event={"ID":"4b9d6485-cf67-49c5-99c1-b8582a0bab70","Type":"ContainerStarted","Data":"9e1ed7ebf6d1fa17181b895f05d45d093802e57011b02b870185acec2590ca56"} Feb 23 13:01:04.619402 master-0 kubenswrapper[7784]: I0223 13:01:04.618848 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" event={"ID":"d9b02d3c-f671-4850-8c6e-315044a1376c","Type":"ContainerStarted","Data":"f9f2d3833534ce883ca50eb44438eaa5f1540dd7900a3929b7c7f66a4a78289a"} Feb 23 13:01:04.620651 master-0 kubenswrapper[7784]: I0223 13:01:04.620603 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" event={"ID":"71cb2f21-6d27-411f-9c2f-d5fa286895a7","Type":"ContainerStarted","Data":"44ecc8bd157550465c3780c8f90979b8897639b6eed19a94cadcc31f44d1bf1b"} Feb 23 13:01:04.622441 master-0 kubenswrapper[7784]: I0223 13:01:04.622403 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" 
event={"ID":"3a6b0d84-a344-43e4-b9c4-c8e0670528de","Type":"ContainerStarted","Data":"f41bbfdb7f3332d7cf43817f8495af6ada5a69e9698540f12848e6c0a2e50947"} Feb 23 13:01:04.624650 master-0 kubenswrapper[7784]: I0223 13:01:04.624610 7784 generic.go:334] "Generic (PLEG): container finished" podID="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" containerID="0174cb89e2935df02e69f5ff93da85ad9ed1219108156764e83f071d6c6cbae7" exitCode=0 Feb 23 13:01:04.624710 master-0 kubenswrapper[7784]: I0223 13:01:04.624665 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" event={"ID":"29126ab2-a689-4b0e-a1f4-4faed19b0fbc","Type":"ContainerDied","Data":"0174cb89e2935df02e69f5ff93da85ad9ed1219108156764e83f071d6c6cbae7"} Feb 23 13:01:04.627036 master-0 kubenswrapper[7784]: I0223 13:01:04.627006 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-rnz52" event={"ID":"f81886b9-fcd3-4666-b550-0688072210f7","Type":"ContainerStarted","Data":"fe9f982f97066798d10a2a6eda10e81734aa6bfc9fadfc74111dbcb92044184d"} Feb 23 13:01:04.627085 master-0 kubenswrapper[7784]: I0223 13:01:04.627037 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-rnz52" event={"ID":"f81886b9-fcd3-4666-b550-0688072210f7","Type":"ContainerStarted","Data":"683ce0dfc8d9412d9d124855df897567cd87cd72bf0e18113725c86bfc97ad40"} Feb 23 13:01:05.631839 master-0 kubenswrapper[7784]: I0223 13:01:05.631575 7784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:05.721693 master-0 kubenswrapper[7784]: I0223 13:01:05.721024 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g"] Feb 23 13:01:05.721693 master-0 kubenswrapper[7784]: E0223 13:01:05.721278 7784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2cc34173-350b-40a9-a164-e500e96caf74" containerName="prober" Feb 23 13:01:05.721693 master-0 kubenswrapper[7784]: I0223 13:01:05.721296 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc34173-350b-40a9-a164-e500e96caf74" containerName="prober" Feb 23 13:01:05.721693 master-0 kubenswrapper[7784]: E0223 13:01:05.721309 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller" Feb 23 13:01:05.721693 master-0 kubenswrapper[7784]: I0223 13:01:05.721316 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller" Feb 23 13:01:05.721693 master-0 kubenswrapper[7784]: I0223 13:01:05.721444 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller" Feb 23 13:01:05.721693 master-0 kubenswrapper[7784]: I0223 13:01:05.721454 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc34173-350b-40a9-a164-e500e96caf74" containerName="prober" Feb 23 13:01:05.722207 master-0 kubenswrapper[7784]: I0223 13:01:05.721772 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq"] Feb 23 13:01:05.722207 master-0 kubenswrapper[7784]: I0223 13:01:05.722063 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" Feb 23 13:01:05.722849 master-0 kubenswrapper[7784]: I0223 13:01:05.722437 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" Feb 23 13:01:05.728066 master-0 kubenswrapper[7784]: I0223 13:01:05.725493 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 13:01:05.728066 master-0 kubenswrapper[7784]: I0223 13:01:05.725635 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 13:01:05.730713 master-0 kubenswrapper[7784]: I0223 13:01:05.730682 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq"] Feb 23 13:01:05.733736 master-0 kubenswrapper[7784]: I0223 13:01:05.733705 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g"] Feb 23 13:01:05.831230 master-0 kubenswrapper[7784]: I0223 13:01:05.831147 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5zks\" (UniqueName: \"kubernetes.io/projected/8a544f5a-06b6-4297-a845-d81e9ab9ece7-kube-api-access-t5zks\") pod \"migrator-5c85bff57-xzh2g\" (UID: \"8a544f5a-06b6-4297-a845-d81e9ab9ece7\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" Feb 23 13:01:05.831562 master-0 kubenswrapper[7784]: I0223 13:01:05.831542 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dcr\" (UniqueName: \"kubernetes.io/projected/5793184d-de96-49ad-a060-0fa0cf278a9c-kube-api-access-v9dcr\") pod \"csi-snapshot-controller-6847bb4785-zw4nq\" (UID: \"5793184d-de96-49ad-a060-0fa0cf278a9c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" Feb 23 13:01:05.933179 master-0 kubenswrapper[7784]: I0223 13:01:05.932938 7784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t5zks\" (UniqueName: \"kubernetes.io/projected/8a544f5a-06b6-4297-a845-d81e9ab9ece7-kube-api-access-t5zks\") pod \"migrator-5c85bff57-xzh2g\" (UID: \"8a544f5a-06b6-4297-a845-d81e9ab9ece7\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" Feb 23 13:01:05.933600 master-0 kubenswrapper[7784]: I0223 13:01:05.933579 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dcr\" (UniqueName: \"kubernetes.io/projected/5793184d-de96-49ad-a060-0fa0cf278a9c-kube-api-access-v9dcr\") pod \"csi-snapshot-controller-6847bb4785-zw4nq\" (UID: \"5793184d-de96-49ad-a060-0fa0cf278a9c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" Feb 23 13:01:05.955706 master-0 kubenswrapper[7784]: I0223 13:01:05.955650 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dcr\" (UniqueName: \"kubernetes.io/projected/5793184d-de96-49ad-a060-0fa0cf278a9c-kube-api-access-v9dcr\") pod \"csi-snapshot-controller-6847bb4785-zw4nq\" (UID: \"5793184d-de96-49ad-a060-0fa0cf278a9c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" Feb 23 13:01:05.956877 master-0 kubenswrapper[7784]: I0223 13:01:05.956844 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5zks\" (UniqueName: \"kubernetes.io/projected/8a544f5a-06b6-4297-a845-d81e9ab9ece7-kube-api-access-t5zks\") pod \"migrator-5c85bff57-xzh2g\" (UID: \"8a544f5a-06b6-4297-a845-d81e9ab9ece7\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" Feb 23 13:01:06.039109 master-0 kubenswrapper[7784]: I0223 13:01:06.039007 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:01:06.059764 master-0 kubenswrapper[7784]: I0223 13:01:06.059253 7784 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" Feb 23 13:01:06.087530 master-0 kubenswrapper[7784]: I0223 13:01:06.086949 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" Feb 23 13:01:06.262599 master-0 kubenswrapper[7784]: I0223 13:01:06.262543 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq"] Feb 23 13:01:06.271902 master-0 kubenswrapper[7784]: W0223 13:01:06.271851 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5793184d_de96_49ad_a060_0fa0cf278a9c.slice/crio-a16ebd5549e68460cc5eab019554f78bf08c0501964eec5eb6763ec49c8e6ef3 WatchSource:0}: Error finding container a16ebd5549e68460cc5eab019554f78bf08c0501964eec5eb6763ec49c8e6ef3: Status 404 returned error can't find the container with id a16ebd5549e68460cc5eab019554f78bf08c0501964eec5eb6763ec49c8e6ef3 Feb 23 13:01:06.289264 master-0 kubenswrapper[7784]: I0223 13:01:06.289220 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.294710 master-0 kubenswrapper[7784]: I0223 13:01:06.294675 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.303435 master-0 kubenswrapper[7784]: I0223 13:01:06.303371 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g"] Feb 23 13:01:06.338176 master-0 kubenswrapper[7784]: I0223 13:01:06.338102 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod 
\"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:01:06.338176 master-0 kubenswrapper[7784]: I0223 13:01:06.338151 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:06.338176 master-0 kubenswrapper[7784]: I0223 13:01:06.338184 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:01:06.338626 master-0 kubenswrapper[7784]: I0223 13:01:06.338324 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:01:06.338626 master-0 kubenswrapper[7784]: E0223 13:01:06.338327 7784 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:06.338626 master-0 kubenswrapper[7784]: E0223 13:01:06.338448 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. 
No retries permitted until 2026-02-23 13:01:14.338422294 +0000 UTC m=+17.073275937 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found Feb 23 13:01:06.338626 master-0 kubenswrapper[7784]: E0223 13:01:06.338494 7784 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:06.338626 master-0 kubenswrapper[7784]: E0223 13:01:06.338593 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.338566018 +0000 UTC m=+17.073419831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found Feb 23 13:01:06.338626 master-0 kubenswrapper[7784]: E0223 13:01:06.338615 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338674 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.33865701 +0000 UTC m=+17.073510663 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: I0223 13:01:06.338670 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338627 7784 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338712 7784 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: I0223 13:01:06.338727 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338734 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:01:14.338720821 +0000 UTC m=+17.073574774 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: I0223 13:01:06.338776 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338803 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: I0223 13:01:06.338809 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338807 7784 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338839 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 
nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.338815045 +0000 UTC m=+17.073668688 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338861 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.338852126 +0000 UTC m=+17.073705769 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: I0223 13:01:06.338878 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338884 7784 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: I0223 13:01:06.338907 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:06.338897 master-0 kubenswrapper[7784]: E0223 13:01:06.338918 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.338906197 +0000 UTC m=+17.073759850 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:06.339526 master-0 kubenswrapper[7784]: I0223 13:01:06.338936 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:01:06.339526 master-0 kubenswrapper[7784]: E0223 13:01:06.338983 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.338973439 +0000 UTC m=+17.073827082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : secret "metrics-daemon-secret" not found Feb 23 13:01:06.339526 master-0 kubenswrapper[7784]: E0223 13:01:06.339030 7784 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:06.339526 master-0 kubenswrapper[7784]: E0223 13:01:06.339058 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.339052421 +0000 UTC m=+17.073906054 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found Feb 23 13:01:06.339526 master-0 kubenswrapper[7784]: E0223 13:01:06.339072 7784 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:01:06.339526 master-0 kubenswrapper[7784]: E0223 13:01:06.339098 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.339090251 +0000 UTC m=+17.073943894 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found
Feb 23 13:01:06.339526 master-0 kubenswrapper[7784]: E0223 13:01:06.339099 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:01:06.339526 master-0 kubenswrapper[7784]: E0223 13:01:06.339124 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.339118402 +0000 UTC m=+17.073972045 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found
Feb 23 13:01:06.367738 master-0 kubenswrapper[7784]: I0223 13:01:06.367688 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:01:06.373781 master-0 kubenswrapper[7784]: I0223 13:01:06.373748 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:01:06.643007 master-0 kubenswrapper[7784]: I0223 13:01:06.642868 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" event={"ID":"8a544f5a-06b6-4297-a845-d81e9ab9ece7","Type":"ContainerStarted","Data":"f575cb15d53ccde2ef110c34dc5bda0d2dd2200d5c840f4afa64c209dc8f16aa"}
Feb 23 13:01:06.644063 master-0 kubenswrapper[7784]: I0223 13:01:06.643921 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerStarted","Data":"a16ebd5549e68460cc5eab019554f78bf08c0501964eec5eb6763ec49c8e6ef3"}
Feb 23 13:01:06.648167 master-0 kubenswrapper[7784]: I0223 13:01:06.648101 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:01:06.649511 master-0 kubenswrapper[7784]: I0223 13:01:06.649479 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:01:07.660254 master-0 kubenswrapper[7784]: I0223 13:01:07.659760 7784 generic.go:334] "Generic (PLEG): container finished" podID="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" containerID="b171809f650608e045a0b142cf5e322c52f3d63ebd75fa84b47d19cacf980b23" exitCode=0
Feb 23 13:01:07.661488 master-0 kubenswrapper[7784]: I0223 13:01:07.659955 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" event={"ID":"29126ab2-a689-4b0e-a1f4-4faed19b0fbc","Type":"ContainerDied","Data":"b171809f650608e045a0b142cf5e322c52f3d63ebd75fa84b47d19cacf980b23"}
Feb 23 13:01:08.666513 master-0 kubenswrapper[7784]: I0223 13:01:08.666386 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerStarted","Data":"8dbfb3a49d15de4419fc29dce0193ff2a8f2f1238053d11c98101bb8a51adb15"}
Feb 23 13:01:09.679512 master-0 kubenswrapper[7784]: I0223 13:01:09.679065 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" event={"ID":"8a544f5a-06b6-4297-a845-d81e9ab9ece7","Type":"ContainerStarted","Data":"0201018804f6ea85ed477b805beca474e8c9c59db1c076d79e61886ec2d16484"}
Feb 23 13:01:09.679512 master-0 kubenswrapper[7784]: I0223 13:01:09.679141 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" event={"ID":"8a544f5a-06b6-4297-a845-d81e9ab9ece7","Type":"ContainerStarted","Data":"b2187ac5d555e961554e28cdf8fc982dd799daeb877ae01bc505c4fe8d1e1515"}
Feb 23 13:01:09.697301 master-0 kubenswrapper[7784]: I0223 13:01:09.697191 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" podStartSLOduration=2.566766443 podStartE2EDuration="4.697173482s" podCreationTimestamp="2026-02-23 13:01:05 +0000 UTC" firstStartedPulling="2026-02-23 13:01:06.276117385 +0000 UTC m=+9.010971048" lastFinishedPulling="2026-02-23 13:01:08.406524444 +0000 UTC m=+11.141378087" observedRunningTime="2026-02-23 13:01:08.679275542 +0000 UTC m=+11.414129185" watchObservedRunningTime="2026-02-23 13:01:09.697173482 +0000 UTC m=+12.432027125"
Feb 23 13:01:09.697574 master-0 kubenswrapper[7784]: I0223 13:01:09.697458 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" podStartSLOduration=1.994505658 podStartE2EDuration="4.697451079s" podCreationTimestamp="2026-02-23 13:01:05 +0000 UTC" firstStartedPulling="2026-02-23 13:01:06.308555217 +0000 UTC m=+9.043408860" lastFinishedPulling="2026-02-23 13:01:09.011500628 +0000 UTC m=+11.746354281" observedRunningTime="2026-02-23 13:01:09.696300781 +0000 UTC m=+12.431154434" watchObservedRunningTime="2026-02-23 13:01:09.697451079 +0000 UTC m=+12.432304722"
Feb 23 13:01:10.686260 master-0 kubenswrapper[7784]: I0223 13:01:10.686030 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" event={"ID":"29126ab2-a689-4b0e-a1f4-4faed19b0fbc","Type":"ContainerStarted","Data":"7cf32cc15b30cd0a472deb261e78baeaf04608bdbd83cf83d235fb4d4ea8600c"}
Feb 23 13:01:14.341221 master-0 kubenswrapper[7784]: I0223 13:01:14.340610 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:01:14.341221 master-0 kubenswrapper[7784]: I0223 13:01:14.341227 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.341290 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.340902 7784 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.341387 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.341443 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341509 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls podName:9bed6748-374e-4d8a-92a0-36d7d735d6b7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.34145892 +0000 UTC m=+33.076312603 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-gjp8h" (UID: "9bed6748-374e-4d8a-92a0-36d7d735d6b7") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341558 7784 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341613 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert podName:a04058be-6928-48c4-a71e-bd9e6427c097 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.341594604 +0000 UTC m=+33.076448287 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert") pod "cluster-version-operator-5cfd9759cf-lphxz" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097") : secret "cluster-version-operator-serving-cert" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.341559 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341657 7784 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341707 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls podName:878aa813-a8b9-4a6f-8086-778df276d0d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.341689286 +0000 UTC m=+33.076542959 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls") pod "ingress-operator-6569778c84-k9h69" (UID: "878aa813-a8b9-4a6f-8086-778df276d0d7") : secret "metrics-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.341658 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341727 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.341760 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341769 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.341752937 +0000 UTC m=+33.076606610 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "node-tuning-operator-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341828 7784 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.341831 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341863 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics podName:35e97ed9-695d-483e-8878-4f231c79f1d2 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.34185162 +0000 UTC m=+33.076705293 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-588zk" (UID: "35e97ed9-695d-483e-8878-4f231c79f1d2") : secret "marketplace-operator-metrics" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.341890 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.341916 7784 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: I0223 13:01:14.342117 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342133 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls podName:f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.342120817 +0000 UTC m=+33.076974490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls") pod "dns-operator-8c7d49845-g8fdn" (UID: "f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b") : secret "metrics-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342227 7784 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342265 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.342252621 +0000 UTC m=+33.077106294 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342329 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342403 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.342389144 +0000 UTC m=+33.077242817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342470 7784 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342505 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.342494896 +0000 UTC m=+33.077348569 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : secret "metrics-daemon-secret" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342564 7784 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342598 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert podName:0d58817c-970f-47b1-a5a5-a491f3e93426 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.342586599 +0000 UTC m=+33.077440272 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-sj5wd" (UID: "0d58817c-970f-47b1-a5a5-a491f3e93426") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342664 7784 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:01:14.343141 master-0 kubenswrapper[7784]: E0223 13:01:14.342712 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls podName:92eaa2e2-61cd-4279-a81f-72db51308148 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.342694881 +0000 UTC m=+33.077548564 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-shphl" (UID: "92eaa2e2-61cd-4279-a81f-72db51308148") : secret "image-registry-operator-tls" not found
Feb 23 13:01:15.714580 master-0 kubenswrapper[7784]: I0223 13:01:15.714475 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" event={"ID":"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9","Type":"ContainerStarted","Data":"bf50e58fb96262a2da0270150de3bc7ed1ff7e9dd4f82079fe11e7f3e00ec9c7"}
Feb 23 13:01:16.722513 master-0 kubenswrapper[7784]: I0223 13:01:16.722252 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-qg27h" event={"ID":"b8bdbf92-61e3-41e9-a48d-4259cee80e9f","Type":"ContainerStarted","Data":"d35a7a7978779d5f0b2024d036894064b58874a31880eb71cd556985d99fd4fb"}
Feb 23 13:01:17.740634 master-0 kubenswrapper[7784]: I0223 13:01:17.734310 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" event={"ID":"3daf0176-92e7-4642-8643-4afbefb77235","Type":"ContainerStarted","Data":"f205f47da789bb0655eaefd3fc629901d18927b18577bd859aed40fe66e3e22f"}
Feb 23 13:01:18.601785 master-0 kubenswrapper[7784]: I0223 13:01:18.601237 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-9pltw"]
Feb 23 13:01:18.602372 master-0 kubenswrapper[7784]: I0223 13:01:18.602307 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.604897 master-0 kubenswrapper[7784]: I0223 13:01:18.604841 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 23 13:01:18.605122 master-0 kubenswrapper[7784]: I0223 13:01:18.605055 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 23 13:01:18.605122 master-0 kubenswrapper[7784]: I0223 13:01:18.605087 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 23 13:01:18.606298 master-0 kubenswrapper[7784]: I0223 13:01:18.606260 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 13:01:18.618880 master-0 kubenswrapper[7784]: I0223 13:01:18.618822 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-9pltw"]
Feb 23 13:01:18.712998 master-0 kubenswrapper[7784]: I0223 13:01:18.712891 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-key\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.712998 master-0 kubenswrapper[7784]: I0223 13:01:18.713028 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b825\" (UniqueName: \"kubernetes.io/projected/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-kube-api-access-4b825\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.713569 master-0 kubenswrapper[7784]: I0223 13:01:18.713272 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-cabundle\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.741977 master-0 kubenswrapper[7784]: I0223 13:01:18.741902 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" event={"ID":"f348bffa-b2f6-4695-88a7-923625e7fb02","Type":"ContainerStarted","Data":"4bf0acfb1627fed2922b1ade4afb1172158564f4516d958d55b369d98f788765"}
Feb 23 13:01:18.745235 master-0 kubenswrapper[7784]: I0223 13:01:18.745171 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" event={"ID":"d71885db-c29e-429a-aa1f-1c274796a69f","Type":"ContainerStarted","Data":"664bed9a58a32d7def57d5398a174d1c1950d8f182a5fd20785e403d394c58a2"}
Feb 23 13:01:18.815188 master-0 kubenswrapper[7784]: I0223 13:01:18.814937 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b825\" (UniqueName: \"kubernetes.io/projected/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-kube-api-access-4b825\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.815188 master-0 kubenswrapper[7784]: I0223 13:01:18.815079 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-cabundle\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.815566 master-0 kubenswrapper[7784]: I0223 13:01:18.815289 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-key\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.818093 master-0 kubenswrapper[7784]: I0223 13:01:18.818013 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-cabundle\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.835451 master-0 kubenswrapper[7784]: I0223 13:01:18.829509 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-key\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.857096 master-0 kubenswrapper[7784]: I0223 13:01:18.853275 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b825\" (UniqueName: \"kubernetes.io/projected/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-kube-api-access-4b825\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:18.922165 master-0 kubenswrapper[7784]: I0223 13:01:18.922085 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:01:19.196592 master-0 kubenswrapper[7784]: I0223 13:01:19.196070 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-9pltw"]
Feb 23 13:01:19.311060 master-0 kubenswrapper[7784]: I0223 13:01:19.310985 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"]
Feb 23 13:01:19.311614 master-0 kubenswrapper[7784]: I0223 13:01:19.311579 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.314067 master-0 kubenswrapper[7784]: I0223 13:01:19.314021 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 13:01:19.314610 master-0 kubenswrapper[7784]: I0223 13:01:19.314554 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:01:19.314764 master-0 kubenswrapper[7784]: I0223 13:01:19.314728 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:01:19.314857 master-0 kubenswrapper[7784]: I0223 13:01:19.314821 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 13:01:19.317928 master-0 kubenswrapper[7784]: I0223 13:01:19.317875 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 13:01:19.319121 master-0 kubenswrapper[7784]: I0223 13:01:19.319074 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"]
Feb 23 13:01:19.426567 master-0 kubenswrapper[7784]: I0223 13:01:19.426478 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.426951 master-0 kubenswrapper[7784]: I0223 13:01:19.426745 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bnk\" (UniqueName: \"kubernetes.io/projected/b111c2b6-8365-42fc-ae42-317d7b84bb57-kube-api-access-k7bnk\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.427049 master-0 kubenswrapper[7784]: I0223 13:01:19.426997 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.427164 master-0 kubenswrapper[7784]: I0223 13:01:19.427133 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.528712 master-0 kubenswrapper[7784]: I0223 13:01:19.528640 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.528712 master-0 kubenswrapper[7784]: I0223 13:01:19.528722 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.529073 master-0 kubenswrapper[7784]: I0223 13:01:19.528805 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.529073 master-0 kubenswrapper[7784]: E0223 13:01:19.529040 7784 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 23 13:01:19.529156 master-0 kubenswrapper[7784]: I0223 13:01:19.529111 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7bnk\" (UniqueName: \"kubernetes.io/projected/b111c2b6-8365-42fc-ae42-317d7b84bb57-kube-api-access-k7bnk\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.529200 master-0 kubenswrapper[7784]: E0223 13:01:19.529157 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert podName:b111c2b6-8365-42fc-ae42-317d7b84bb57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.02911846 +0000 UTC m=+22.763972143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert") pod "route-controller-manager-7bcb58f8c7-q6sj5" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57") : secret "serving-cert" not found
Feb 23 13:01:19.529253 master-0 kubenswrapper[7784]: E0223 13:01:19.529226 7784 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:19.529383 master-0 kubenswrapper[7784]: E0223 13:01:19.529315 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca podName:b111c2b6-8365-42fc-ae42-317d7b84bb57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.029287734 +0000 UTC m=+22.764141487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca") pod "route-controller-manager-7bcb58f8c7-q6sj5" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57") : configmap "client-ca" not found
Feb 23 13:01:19.529652 master-0 kubenswrapper[7784]: E0223 13:01:19.529591 7784 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found
Feb 23 13:01:19.529770 master-0 kubenswrapper[7784]: E0223 13:01:19.529732 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config podName:b111c2b6-8365-42fc-ae42-317d7b84bb57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.029696985 +0000 UTC m=+22.764550648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config") pod "route-controller-manager-7bcb58f8c7-q6sj5" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57") : configmap "config" not found
Feb 23 13:01:19.557635 master-0 kubenswrapper[7784]: I0223 13:01:19.557567 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7bnk\" (UniqueName: \"kubernetes.io/projected/b111c2b6-8365-42fc-ae42-317d7b84bb57-kube-api-access-k7bnk\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:19.757427 master-0 kubenswrapper[7784]: I0223 13:01:19.757313 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-9pltw" event={"ID":"f2c50f9a-8c73-4cb9-9cbf-2565496212a6","Type":"ContainerStarted","Data":"ea1eb72990e94dc776f51ab63d27faea76bd89ac6903bb508a9edd4321ae5a8a"}
Feb 23 13:01:19.757427 master-0 kubenswrapper[7784]: I0223 13:01:19.757405 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-9pltw" event={"ID":"f2c50f9a-8c73-4cb9-9cbf-2565496212a6","Type":"ContainerStarted","Data":"019a9bcf24ce5ea8628fb0a222b64597a0b233bcb8a8eee4032689bd4a953ff1"}
Feb 23 13:01:19.759077 master-0 kubenswrapper[7784]: I0223 13:01:19.759035 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" event={"ID":"7d0a976c-1492-4989-a5ff-e386564dd6ba","Type":"ContainerStarted","Data":"c355e2c1c4f0e97e7c52c65af1c7679e829d5cd786200eccdf8b33d7cd15372a"}
Feb 23 13:01:19.773605 master-0 kubenswrapper[7784]: I0223 13:01:19.773493 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-576b4d78bd-9pltw" podStartSLOduration=1.773474037 podStartE2EDuration="1.773474037s" podCreationTimestamp="2026-02-23 13:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:19.77198073 +0000 UTC m=+22.506834383" watchObservedRunningTime="2026-02-23 13:01:19.773474037 +0000 UTC m=+22.508327700"
Feb 23 13:01:19.888382 master-0 kubenswrapper[7784]: I0223 13:01:19.887443 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-579c55657-5nt22"]
Feb 23 13:01:19.888382 master-0 kubenswrapper[7784]: I0223 13:01:19.887936 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-579c55657-5nt22"
Feb 23 13:01:19.892365 master-0 kubenswrapper[7784]: I0223 13:01:19.891810 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:01:19.897399 master-0 kubenswrapper[7784]: I0223 13:01:19.895292 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:01:19.897399 master-0 kubenswrapper[7784]: I0223 13:01:19.895515 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:01:19.897399 master-0 kubenswrapper[7784]: I0223 13:01:19.895538 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:01:19.897399 master-0 kubenswrapper[7784]: I0223 13:01:19.895743 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:01:19.917608 master-0 kubenswrapper[7784]: I0223 13:01:19.914855 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-579c55657-5nt22"]
Feb 23 13:01:19.917608 master-0 kubenswrapper[7784]: I0223 13:01:19.916419 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:01:19.941370 master-0 kubenswrapper[7784]: I0223 13:01:19.935673 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22"
Feb 23 13:01:19.941370 master-0 kubenswrapper[7784]: I0223 13:01:19.935714 7784
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7v59\" (UniqueName: \"kubernetes.io/projected/24ac0791-1867-42fa-a312-15fe8489e6f4-kube-api-access-d7v59\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:19.941370 master-0 kubenswrapper[7784]: I0223 13:01:19.935785 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:19.941370 master-0 kubenswrapper[7784]: I0223 13:01:19.935847 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:19.941370 master-0 kubenswrapper[7784]: I0223 13:01:19.935875 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-proxy-ca-bundles\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.037094 master-0 kubenswrapper[7784]: I0223 13:01:20.036949 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca\") pod 
\"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.037094 master-0 kubenswrapper[7784]: I0223 13:01:20.037032 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" Feb 23 13:01:20.037094 master-0 kubenswrapper[7784]: I0223 13:01:20.037057 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-proxy-ca-bundles\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.037094 master-0 kubenswrapper[7784]: I0223 13:01:20.037095 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" Feb 23 13:01:20.037458 master-0 kubenswrapper[7784]: I0223 13:01:20.037122 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.037458 master-0 kubenswrapper[7784]: I0223 13:01:20.037175 7784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7v59\" (UniqueName: \"kubernetes.io/projected/24ac0791-1867-42fa-a312-15fe8489e6f4-kube-api-access-d7v59\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.037458 master-0 kubenswrapper[7784]: I0223 13:01:20.037228 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.037458 master-0 kubenswrapper[7784]: I0223 13:01:20.037251 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" Feb 23 13:01:20.037458 master-0 kubenswrapper[7784]: E0223 13:01:20.037414 7784 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:20.037605 master-0 kubenswrapper[7784]: E0223 13:01:20.037485 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca podName:b111c2b6-8365-42fc-ae42-317d7b84bb57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.037462845 +0000 UTC m=+23.772316498 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca") pod "route-controller-manager-7bcb58f8c7-q6sj5" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57") : configmap "client-ca" not found Feb 23 13:01:20.037832 master-0 kubenswrapper[7784]: E0223 13:01:20.037800 7784 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Feb 23 13:01:20.037963 master-0 kubenswrapper[7784]: E0223 13:01:20.037952 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config podName:b111c2b6-8365-42fc-ae42-317d7b84bb57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.037926357 +0000 UTC m=+23.772780000 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config") pod "route-controller-manager-7bcb58f8c7-q6sj5" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57") : configmap "config" not found Feb 23 13:01:20.038052 master-0 kubenswrapper[7784]: E0223 13:01:20.038026 7784 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:20.038090 master-0 kubenswrapper[7784]: E0223 13:01:20.038069 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert podName:24ac0791-1867-42fa-a312-15fe8489e6f4 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.538057521 +0000 UTC m=+23.272911174 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert") pod "controller-manager-579c55657-5nt22" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4") : secret "serving-cert" not found Feb 23 13:01:20.038145 master-0 kubenswrapper[7784]: E0223 13:01:20.038133 7784 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:20.038225 master-0 kubenswrapper[7784]: E0223 13:01:20.038210 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca podName:24ac0791-1867-42fa-a312-15fe8489e6f4 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.538198764 +0000 UTC m=+23.273052407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca") pod "controller-manager-579c55657-5nt22" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4") : configmap "client-ca" not found Feb 23 13:01:20.038308 master-0 kubenswrapper[7784]: E0223 13:01:20.038283 7784 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 23 13:01:20.038372 master-0 kubenswrapper[7784]: E0223 13:01:20.038321 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config podName:24ac0791-1867-42fa-a312-15fe8489e6f4 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.538311317 +0000 UTC m=+23.273164970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config") pod "controller-manager-579c55657-5nt22" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4") : configmap "config" not found Feb 23 13:01:20.038508 master-0 kubenswrapper[7784]: E0223 13:01:20.038493 7784 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:20.038583 master-0 kubenswrapper[7784]: E0223 13:01:20.038575 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert podName:b111c2b6-8365-42fc-ae42-317d7b84bb57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.038566563 +0000 UTC m=+23.773420206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert") pod "route-controller-manager-7bcb58f8c7-q6sj5" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57") : secret "serving-cert" not found Feb 23 13:01:20.040173 master-0 kubenswrapper[7784]: I0223 13:01:20.040127 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-proxy-ca-bundles\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.054625 master-0 kubenswrapper[7784]: I0223 13:01:20.054575 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7v59\" (UniqueName: \"kubernetes.io/projected/24ac0791-1867-42fa-a312-15fe8489e6f4-kube-api-access-d7v59\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.297960 
master-0 kubenswrapper[7784]: I0223 13:01:20.297816 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:01:20.298462 master-0 kubenswrapper[7784]: I0223 13:01:20.298444 7784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:20.319946 master-0 kubenswrapper[7784]: I0223 13:01:20.319862 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:01:20.542850 master-0 kubenswrapper[7784]: I0223 13:01:20.542786 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.543054 master-0 kubenswrapper[7784]: I0223 13:01:20.542887 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.543054 master-0 kubenswrapper[7784]: I0223 13:01:20.542966 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:20.543177 master-0 kubenswrapper[7784]: E0223 13:01:20.543147 7784 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 23 13:01:20.543243 
master-0 kubenswrapper[7784]: E0223 13:01:20.543222 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config podName:24ac0791-1867-42fa-a312-15fe8489e6f4 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.543199816 +0000 UTC m=+24.278053459 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config") pod "controller-manager-579c55657-5nt22" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4") : configmap "config" not found Feb 23 13:01:20.543290 master-0 kubenswrapper[7784]: E0223 13:01:20.543273 7784 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:20.543321 master-0 kubenswrapper[7784]: E0223 13:01:20.543299 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca podName:24ac0791-1867-42fa-a312-15fe8489e6f4 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.543290978 +0000 UTC m=+24.278144631 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca") pod "controller-manager-579c55657-5nt22" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4") : configmap "client-ca" not found Feb 23 13:01:20.547363 master-0 kubenswrapper[7784]: E0223 13:01:20.543426 7784 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:20.547363 master-0 kubenswrapper[7784]: E0223 13:01:20.543464 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert podName:24ac0791-1867-42fa-a312-15fe8489e6f4 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:01:21.543455292 +0000 UTC m=+24.278308935 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert") pod "controller-manager-579c55657-5nt22" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4") : secret "serving-cert" not found Feb 23 13:01:20.909793 master-0 kubenswrapper[7784]: I0223 13:01:20.909733 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-579c55657-5nt22"] Feb 23 13:01:20.910686 master-0 kubenswrapper[7784]: E0223 13:01:20.910008 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-579c55657-5nt22" podUID="24ac0791-1867-42fa-a312-15fe8489e6f4" Feb 23 13:01:20.936683 master-0 kubenswrapper[7784]: I0223 13:01:20.936607 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"] Feb 23 13:01:20.936986 master-0 kubenswrapper[7784]: E0223 13:01:20.936896 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" podUID="b111c2b6-8365-42fc-ae42-317d7b84bb57" Feb 23 13:01:21.051096 master-0 kubenswrapper[7784]: I0223 13:01:21.051004 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" Feb 
23 13:01:21.051096 master-0 kubenswrapper[7784]: I0223 13:01:21.051094 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" Feb 23 13:01:21.051773 master-0 kubenswrapper[7784]: I0223 13:01:21.051423 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" Feb 23 13:01:21.051773 master-0 kubenswrapper[7784]: E0223 13:01:21.051453 7784 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:21.051773 master-0 kubenswrapper[7784]: E0223 13:01:21.051596 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert podName:b111c2b6-8365-42fc-ae42-317d7b84bb57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:23.051560831 +0000 UTC m=+25.786414514 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert") pod "route-controller-manager-7bcb58f8c7-q6sj5" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57") : secret "serving-cert" not found Feb 23 13:01:21.052864 master-0 kubenswrapper[7784]: I0223 13:01:21.052413 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" Feb 23 13:01:21.052864 master-0 kubenswrapper[7784]: I0223 13:01:21.052776 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca\") pod \"route-controller-manager-7bcb58f8c7-q6sj5\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5" Feb 23 13:01:21.558168 master-0 kubenswrapper[7784]: I0223 13:01:21.558103 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:21.558414 master-0 kubenswrapper[7784]: I0223 13:01:21.558381 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 
13:01:21.558672 master-0 kubenswrapper[7784]: E0223 13:01:21.558616 7784 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:21.558850 master-0 kubenswrapper[7784]: I0223 13:01:21.558780 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:21.558936 master-0 kubenswrapper[7784]: E0223 13:01:21.558889 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert podName:24ac0791-1867-42fa-a312-15fe8489e6f4 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:23.558829589 +0000 UTC m=+26.293683232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert") pod "controller-manager-579c55657-5nt22" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4") : secret "serving-cert" not found Feb 23 13:01:21.560041 master-0 kubenswrapper[7784]: I0223 13:01:21.559965 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") " pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:21.560188 master-0 kubenswrapper[7784]: I0223 13:01:21.560151 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config\") pod \"controller-manager-579c55657-5nt22\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") 
" pod="openshift-controller-manager/controller-manager-579c55657-5nt22" Feb 23 13:01:21.653568 master-0 kubenswrapper[7784]: I0223 13:01:21.653516 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"] Feb 23 13:01:21.654382 master-0 kubenswrapper[7784]: I0223 13:01:21.654359 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:01:21.659037 master-0 kubenswrapper[7784]: I0223 13:01:21.658907 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 23 13:01:21.660178 master-0 kubenswrapper[7784]: I0223 13:01:21.660020 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 23 13:01:21.669025 master-0 kubenswrapper[7784]: I0223 13:01:21.668975 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 23 13:01:21.673082 master-0 kubenswrapper[7784]: I0223 13:01:21.673032 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"] Feb 23 13:01:21.760298 master-0 kubenswrapper[7784]: I0223 13:01:21.760231 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6xw4\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-kube-api-access-r6xw4\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:01:21.760547 master-0 kubenswrapper[7784]: I0223 13:01:21.760309 7784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:01:21.760547 master-0 kubenswrapper[7784]: I0223 13:01:21.760376 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:01:21.760547 master-0 kubenswrapper[7784]: I0223 13:01:21.760463 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:01:21.760547 master-0 kubenswrapper[7784]: I0223 13:01:21.760465 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"] Feb 23 13:01:21.760809 master-0 kubenswrapper[7784]: I0223 13:01:21.760750 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6ff7868e-f0d3-4c63-901f-fed11d623cf1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.761390 master-0 kubenswrapper[7784]: I0223 13:01:21.761359 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.763125 master-0 kubenswrapper[7784]: W0223 13:01:21.763088 7784 reflector.go:561] object-"openshift-catalogd"/"catalogserver-cert": failed to list *v1.Secret: secrets "catalogserver-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object
Feb 23 13:01:21.763193 master-0 kubenswrapper[7784]: E0223 13:01:21.763150 7784 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogserver-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"catalogserver-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Feb 23 13:01:21.764111 master-0 kubenswrapper[7784]: I0223 13:01:21.764071 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 23 13:01:21.764815 master-0 kubenswrapper[7784]: I0223 13:01:21.764794 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 23 13:01:21.766060 master-0 kubenswrapper[7784]: I0223 13:01:21.766026 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:21.766164 master-0 kubenswrapper[7784]: I0223 13:01:21.766145 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-579c55657-5nt22"
Feb 23 13:01:21.777725 master-0 kubenswrapper[7784]: I0223 13:01:21.777674 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 23 13:01:21.780649 master-0 kubenswrapper[7784]: I0223 13:01:21.780277 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:21.785617 master-0 kubenswrapper[7784]: I0223 13:01:21.785572 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"]
Feb 23 13:01:21.790519 master-0 kubenswrapper[7784]: I0223 13:01:21.790274 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-579c55657-5nt22"
Feb 23 13:01:21.861719 master-0 kubenswrapper[7784]: I0223 13:01:21.861571 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7v59\" (UniqueName: \"kubernetes.io/projected/24ac0791-1867-42fa-a312-15fe8489e6f4-kube-api-access-d7v59\") pod \"24ac0791-1867-42fa-a312-15fe8489e6f4\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") "
Feb 23 13:01:21.861719 master-0 kubenswrapper[7784]: I0223 13:01:21.861626 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-proxy-ca-bundles\") pod \"24ac0791-1867-42fa-a312-15fe8489e6f4\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") "
Feb 23 13:01:21.861719 master-0 kubenswrapper[7784]: I0223 13:01:21.861655 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config\") pod \"b111c2b6-8365-42fc-ae42-317d7b84bb57\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") "
Feb 23 13:01:21.861719 master-0 kubenswrapper[7784]: I0223 13:01:21.861687 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config\") pod \"24ac0791-1867-42fa-a312-15fe8489e6f4\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") "
Feb 23 13:01:21.862603 master-0 kubenswrapper[7784]: I0223 13:01:21.862393 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca\") pod \"b111c2b6-8365-42fc-ae42-317d7b84bb57\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") "
Feb 23 13:01:21.862603 master-0 kubenswrapper[7784]: I0223 13:01:21.862489 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config" (OuterVolumeSpecName: "config") pod "b111c2b6-8365-42fc-ae42-317d7b84bb57" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:21.862603 master-0 kubenswrapper[7784]: I0223 13:01:21.862489 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "24ac0791-1867-42fa-a312-15fe8489e6f4" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:21.862603 master-0 kubenswrapper[7784]: I0223 13:01:21.862552 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7bnk\" (UniqueName: \"kubernetes.io/projected/b111c2b6-8365-42fc-ae42-317d7b84bb57-kube-api-access-k7bnk\") pod \"b111c2b6-8365-42fc-ae42-317d7b84bb57\" (UID: \"b111c2b6-8365-42fc-ae42-317d7b84bb57\") "
Feb 23 13:01:21.862603 master-0 kubenswrapper[7784]: I0223 13:01:21.862595 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca\") pod \"24ac0791-1867-42fa-a312-15fe8489e6f4\" (UID: \"24ac0791-1867-42fa-a312-15fe8489e6f4\") "
Feb 23 13:01:21.863125 master-0 kubenswrapper[7784]: I0223 13:01:21.862842 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jftvv\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-kube-api-access-jftvv\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.863125 master-0 kubenswrapper[7784]: I0223 13:01:21.862921 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.863125 master-0 kubenswrapper[7784]: I0223 13:01:21.862999 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.863308 master-0 kubenswrapper[7784]: I0223 13:01:21.863168 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-cache\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.863308 master-0 kubenswrapper[7784]: I0223 13:01:21.863208 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.863308 master-0 kubenswrapper[7784]: I0223 13:01:21.863257 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.863530 master-0 kubenswrapper[7784]: I0223 13:01:21.863395 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.863530 master-0 kubenswrapper[7784]: I0223 13:01:21.863423 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.863530 master-0 kubenswrapper[7784]: I0223 13:01:21.863444 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.863530 master-0 kubenswrapper[7784]: I0223 13:01:21.863455 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "24ac0791-1867-42fa-a312-15fe8489e6f4" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:21.863530 master-0 kubenswrapper[7784]: I0223 13:01:21.863443 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.863867 master-0 kubenswrapper[7784]: I0223 13:01:21.863561 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.863867 master-0 kubenswrapper[7784]: I0223 13:01:21.863545 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca" (OuterVolumeSpecName: "client-ca") pod "b111c2b6-8365-42fc-ae42-317d7b84bb57" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:21.863867 master-0 kubenswrapper[7784]: I0223 13:01:21.863600 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6ff7868e-f0d3-4c63-901f-fed11d623cf1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.863867 master-0 kubenswrapper[7784]: I0223 13:01:21.863703 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6xw4\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-kube-api-access-r6xw4\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.863867 master-0 kubenswrapper[7784]: I0223 13:01:21.863742 7784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.863867 master-0 kubenswrapper[7784]: I0223 13:01:21.863756 7784 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.863867 master-0 kubenswrapper[7784]: I0223 13:01:21.863766 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.863867 master-0 kubenswrapper[7784]: I0223 13:01:21.863775 7784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b111c2b6-8365-42fc-ae42-317d7b84bb57-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.864318 master-0 kubenswrapper[7784]: I0223 13:01:21.864303 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config" (OuterVolumeSpecName: "config") pod "24ac0791-1867-42fa-a312-15fe8489e6f4" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:21.864434 master-0 kubenswrapper[7784]: I0223 13:01:21.864374 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6ff7868e-f0d3-4c63-901f-fed11d623cf1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.865460 master-0 kubenswrapper[7784]: I0223 13:01:21.865412 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ac0791-1867-42fa-a312-15fe8489e6f4-kube-api-access-d7v59" (OuterVolumeSpecName: "kube-api-access-d7v59") pod "24ac0791-1867-42fa-a312-15fe8489e6f4" (UID: "24ac0791-1867-42fa-a312-15fe8489e6f4"). InnerVolumeSpecName "kube-api-access-d7v59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:01:21.871141 master-0 kubenswrapper[7784]: I0223 13:01:21.870413 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.871141 master-0 kubenswrapper[7784]: I0223 13:01:21.870609 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b111c2b6-8365-42fc-ae42-317d7b84bb57-kube-api-access-k7bnk" (OuterVolumeSpecName: "kube-api-access-k7bnk") pod "b111c2b6-8365-42fc-ae42-317d7b84bb57" (UID: "b111c2b6-8365-42fc-ae42-317d7b84bb57"). InnerVolumeSpecName "kube-api-access-k7bnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:01:21.882726 master-0 kubenswrapper[7784]: I0223 13:01:21.882667 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6xw4\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-kube-api-access-r6xw4\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.964560 master-0 kubenswrapper[7784]: I0223 13:01:21.964459 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.964560 master-0 kubenswrapper[7784]: I0223 13:01:21.964511 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.964560 master-0 kubenswrapper[7784]: I0223 13:01:21.964561 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jftvv\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-kube-api-access-jftvv\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.964604 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-cache\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.964620 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.964655 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.964701 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ac0791-1867-42fa-a312-15fe8489e6f4-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.964714 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7bnk\" (UniqueName: \"kubernetes.io/projected/b111c2b6-8365-42fc-ae42-317d7b84bb57-kube-api-access-k7bnk\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.964729 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7v59\" (UniqueName: \"kubernetes.io/projected/24ac0791-1867-42fa-a312-15fe8489e6f4-kube-api-access-d7v59\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.965720 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.966373 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.967200 master-0 kubenswrapper[7784]: I0223 13:01:21.966546 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-cache\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.969625 master-0 kubenswrapper[7784]: I0223 13:01:21.969548 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:21.971550 master-0 kubenswrapper[7784]: I0223 13:01:21.971372 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:21.999146 master-0 kubenswrapper[7784]: I0223 13:01:21.999080 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jftvv\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-kube-api-access-jftvv\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:22.248068 master-0 kubenswrapper[7784]: I0223 13:01:22.247977 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"]
Feb 23 13:01:22.677453 master-0 kubenswrapper[7784]: I0223 13:01:22.670053 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 23 13:01:22.682975 master-0 kubenswrapper[7784]: I0223 13:01:22.682908 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:22.771215 master-0 kubenswrapper[7784]: I0223 13:01:22.771160 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-579c55657-5nt22"
Feb 23 13:01:22.774377 master-0 kubenswrapper[7784]: I0223 13:01:22.772554 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" event={"ID":"6ff7868e-f0d3-4c63-901f-fed11d623cf1","Type":"ContainerStarted","Data":"4cc3ecc5feacb9f931479e4483246f1ec0ef16491cc14ad9cd0c596a2b97f27d"}
Feb 23 13:01:22.774377 master-0 kubenswrapper[7784]: I0223 13:01:22.772632 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" event={"ID":"6ff7868e-f0d3-4c63-901f-fed11d623cf1","Type":"ContainerStarted","Data":"d6786dcf48d821a6321a52c765c39223e7ae469bc0400a1737f59d9fc5cdb110"}
Feb 23 13:01:22.774377 master-0 kubenswrapper[7784]: I0223 13:01:22.772695 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"
Feb 23 13:01:22.834206 master-0 kubenswrapper[7784]: I0223 13:01:22.833705 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-579c55657-5nt22"]
Feb 23 13:01:22.840845 master-0 kubenswrapper[7784]: I0223 13:01:22.840808 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-579c55657-5nt22"]
Feb 23 13:01:22.875856 master-0 kubenswrapper[7784]: I0223 13:01:22.875775 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"]
Feb 23 13:01:22.878369 master-0 kubenswrapper[7784]: I0223 13:01:22.878166 7784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ac0791-1867-42fa-a312-15fe8489e6f4-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:22.885564 master-0 kubenswrapper[7784]: I0223 13:01:22.885499 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-q6sj5"]
Feb 23 13:01:22.978702 master-0 kubenswrapper[7784]: I0223 13:01:22.978586 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:22.980354 master-0 kubenswrapper[7784]: I0223 13:01:22.979620 7784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b111c2b6-8365-42fc-ae42-317d7b84bb57-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:23.186696 master-0 kubenswrapper[7784]: I0223 13:01:23.186257 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"]
Feb 23 13:01:23.214279 master-0 kubenswrapper[7784]: W0223 13:01:23.214228 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfce9f67d_0b27_41e3_ba4c_ed9cca25703e.slice/crio-fc3bdf010ad07d49b80af021e765b99347775940dd1bba2296554fae89223428 WatchSource:0}: Error finding container fc3bdf010ad07d49b80af021e765b99347775940dd1bba2296554fae89223428: Status 404 returned error can't find the container with id fc3bdf010ad07d49b80af021e765b99347775940dd1bba2296554fae89223428
Feb 23 13:01:23.529152 master-0 kubenswrapper[7784]: I0223 13:01:23.527891 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ac0791-1867-42fa-a312-15fe8489e6f4" path="/var/lib/kubelet/pods/24ac0791-1867-42fa-a312-15fe8489e6f4/volumes"
Feb 23 13:01:23.529152 master-0 kubenswrapper[7784]: I0223 13:01:23.528457 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b111c2b6-8365-42fc-ae42-317d7b84bb57" path="/var/lib/kubelet/pods/b111c2b6-8365-42fc-ae42-317d7b84bb57/volumes"
Feb 23 13:01:23.780710 master-0 kubenswrapper[7784]: I0223 13:01:23.780567 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" event={"ID":"6ff7868e-f0d3-4c63-901f-fed11d623cf1","Type":"ContainerStarted","Data":"c436e936bbfe836d1e57728e42b98c0f389ba1e7c38e2e1049f9f6d51e136292"}
Feb 23 13:01:23.780710 master-0 kubenswrapper[7784]: I0223 13:01:23.780712 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:01:23.783816 master-0 kubenswrapper[7784]: I0223 13:01:23.783768 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" event={"ID":"fce9f67d-0b27-41e3-ba4c-ed9cca25703e","Type":"ContainerStarted","Data":"d249745523695601f887a8698e1ad99347f7c0f390b57c191ff627979ced32b8"}
Feb 23 13:01:23.783816 master-0 kubenswrapper[7784]: I0223 13:01:23.783815 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" event={"ID":"fce9f67d-0b27-41e3-ba4c-ed9cca25703e","Type":"ContainerStarted","Data":"954d5a92493834c76a7ee5510ec4359ae1890858e1cef48f9f9b43630785ef3a"}
Feb 23 13:01:23.783917 master-0 kubenswrapper[7784]: I0223 13:01:23.783826 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" event={"ID":"fce9f67d-0b27-41e3-ba4c-ed9cca25703e","Type":"ContainerStarted","Data":"fc3bdf010ad07d49b80af021e765b99347775940dd1bba2296554fae89223428"}
Feb 23 13:01:23.784415 master-0 kubenswrapper[7784]: I0223 13:01:23.784388 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:01:23.798458 master-0 kubenswrapper[7784]: I0223 13:01:23.798376 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" podStartSLOduration=2.7983151790000003 podStartE2EDuration="2.798315179s" podCreationTimestamp="2026-02-23 13:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:23.798188506 +0000 UTC m=+26.533042149" watchObservedRunningTime="2026-02-23 13:01:23.798315179 +0000 UTC m=+26.533168822"
Feb 23 13:01:24.460404 master-0 kubenswrapper[7784]: I0223 13:01:24.459709 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" podStartSLOduration=3.459667695 podStartE2EDuration="3.459667695s" podCreationTimestamp="2026-02-23 13:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:23.813661263 +0000 UTC m=+26.548514926" watchObservedRunningTime="2026-02-23 13:01:24.459667695 +0000 UTC m=+27.194521368"
Feb 23 13:01:24.463228 master-0 kubenswrapper[7784]: I0223 13:01:24.463178 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"]
Feb 23 13:01:24.464528 master-0 kubenswrapper[7784]: I0223 13:01:24.464491 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.467138 master-0 kubenswrapper[7784]: I0223 13:01:24.467093 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:01:24.467291 master-0 kubenswrapper[7784]: I0223 13:01:24.467199 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:01:24.468138 master-0 kubenswrapper[7784]: I0223 13:01:24.468109 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:01:24.469148 master-0 kubenswrapper[7784]: I0223 13:01:24.468956 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:01:24.469148 master-0 kubenswrapper[7784]: I0223 13:01:24.469026 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:01:24.479259 master-0 kubenswrapper[7784]: I0223 13:01:24.479186 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:01:24.489480 master-0 kubenswrapper[7784]: I0223 13:01:24.481907 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"]
Feb 23 13:01:24.599972 master-0 kubenswrapper[7784]: I0223 13:01:24.599895 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-client-ca\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.599972 master-0 kubenswrapper[7784]: I0223 13:01:24.599956 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86d78712-4b0d-41c2-b032-69354f80add1-serving-cert\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.600365 master-0 kubenswrapper[7784]: I0223 13:01:24.600010 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-proxy-ca-bundles\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.600365 master-0 kubenswrapper[7784]: I0223 13:01:24.600295 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-config\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.600480 master-0 kubenswrapper[7784]: I0223 13:01:24.600445 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9f46\" (UniqueName: \"kubernetes.io/projected/86d78712-4b0d-41c2-b032-69354f80add1-kube-api-access-g9f46\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.655639 master-0 kubenswrapper[7784]: I0223 13:01:24.655547 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"]
Feb 23 13:01:24.657408 master-0 kubenswrapper[7784]: I0223 13:01:24.657371 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"
Feb 23 13:01:24.665468 master-0 kubenswrapper[7784]: I0223 13:01:24.665063 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"]
Feb 23 13:01:24.666203 master-0 kubenswrapper[7784]: I0223 13:01:24.665593 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 13:01:24.666203 master-0 kubenswrapper[7784]: I0223 13:01:24.665674 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:01:24.666203 master-0 kubenswrapper[7784]: I0223 13:01:24.665852 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:01:24.667470 master-0 kubenswrapper[7784]: I0223 13:01:24.667178 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 13:01:24.667470 master-0 kubenswrapper[7784]: I0223 13:01:24.667375 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 13:01:24.701159 master-0 kubenswrapper[7784]: I0223 13:01:24.701071 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-client-ca\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.701159 master-0 kubenswrapper[7784]: I0223 13:01:24.701132 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86d78712-4b0d-41c2-b032-69354f80add1-serving-cert\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.701511 master-0 kubenswrapper[7784]: I0223 13:01:24.701245 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-proxy-ca-bundles\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.703350 master-0 kubenswrapper[7784]: I0223 13:01:24.703292 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-config\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"
Feb 23 13:01:24.703409 master-0 kubenswrapper[7784]: I0223 13:01:24.703383 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxr8g\" (UniqueName: \"kubernetes.io/projected/0d28aff4-b82a-4593-88f0-e59249baf316-kube-api-access-gxr8g\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"
Feb 23 13:01:24.703448 master-0 kubenswrapper[7784]: I0223 13:01:24.703435 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-client-ca\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"
Feb 23 13:01:24.703810 master-0 kubenswrapper[7784]: I0223 13:01:24.703741 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-config\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.703899 master-0 kubenswrapper[7784]: I0223 13:01:24.703861 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9f46\" (UniqueName: \"kubernetes.io/projected/86d78712-4b0d-41c2-b032-69354f80add1-kube-api-access-g9f46\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.703936 master-0 kubenswrapper[7784]: I0223 13:01:24.703909 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"
Feb 23 13:01:24.705267 master-0 kubenswrapper[7784]: I0223 13:01:24.705207 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-proxy-ca-bundles\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"
Feb 23 13:01:24.705401 master-0 kubenswrapper[7784]: I0223 13:01:24.705368 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"config\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-config\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:24.706696 master-0 kubenswrapper[7784]: I0223 13:01:24.706644 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-client-ca\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:24.711488 master-0 kubenswrapper[7784]: I0223 13:01:24.711383 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86d78712-4b0d-41c2-b032-69354f80add1-serving-cert\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:24.732594 master-0 kubenswrapper[7784]: I0223 13:01:24.732519 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9f46\" (UniqueName: \"kubernetes.io/projected/86d78712-4b0d-41c2-b032-69354f80add1-kube-api-access-g9f46\") pod \"controller-manager-75c5cddb8f-xkrds\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:24.797365 master-0 kubenswrapper[7784]: I0223 13:01:24.797284 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:24.804975 master-0 kubenswrapper[7784]: I0223 13:01:24.804906 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:24.805176 master-0 kubenswrapper[7784]: E0223 13:01:24.805143 7784 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:24.805267 master-0 kubenswrapper[7784]: E0223 13:01:24.805249 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert podName:0d28aff4-b82a-4593-88f0-e59249baf316 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:25.305217645 +0000 UTC m=+28.040071298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert") pod "route-controller-manager-5bb8b89bff-xff2j" (UID: "0d28aff4-b82a-4593-88f0-e59249baf316") : secret "serving-cert" not found Feb 23 13:01:24.805410 master-0 kubenswrapper[7784]: I0223 13:01:24.805367 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-config\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:24.805714 master-0 kubenswrapper[7784]: I0223 13:01:24.805630 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxr8g\" (UniqueName: \"kubernetes.io/projected/0d28aff4-b82a-4593-88f0-e59249baf316-kube-api-access-gxr8g\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:24.805816 master-0 kubenswrapper[7784]: I0223 13:01:24.805786 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-client-ca\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:24.807040 master-0 kubenswrapper[7784]: I0223 13:01:24.806997 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-client-ca\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " 
pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:24.807199 master-0 kubenswrapper[7784]: I0223 13:01:24.807107 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-config\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:24.830107 master-0 kubenswrapper[7784]: I0223 13:01:24.830024 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxr8g\" (UniqueName: \"kubernetes.io/projected/0d28aff4-b82a-4593-88f0-e59249baf316-kube-api-access-gxr8g\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:25.023436 master-0 kubenswrapper[7784]: I0223 13:01:25.023363 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"] Feb 23 13:01:25.034499 master-0 kubenswrapper[7784]: W0223 13:01:25.034431 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d78712_4b0d_41c2_b032_69354f80add1.slice/crio-e9d7a7a1022f0806a1ff5f0a77c69b4b858f7f242a175772e09937b152269037 WatchSource:0}: Error finding container e9d7a7a1022f0806a1ff5f0a77c69b4b858f7f242a175772e09937b152269037: Status 404 returned error can't find the container with id e9d7a7a1022f0806a1ff5f0a77c69b4b858f7f242a175772e09937b152269037 Feb 23 13:01:25.315763 master-0 kubenswrapper[7784]: I0223 13:01:25.315126 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert\") pod 
\"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:25.315763 master-0 kubenswrapper[7784]: E0223 13:01:25.315505 7784 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:25.316710 master-0 kubenswrapper[7784]: E0223 13:01:25.316634 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert podName:0d28aff4-b82a-4593-88f0-e59249baf316 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:26.316571625 +0000 UTC m=+29.051425268 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert") pod "route-controller-manager-5bb8b89bff-xff2j" (UID: "0d28aff4-b82a-4593-88f0-e59249baf316") : secret "serving-cert" not found Feb 23 13:01:25.795240 master-0 kubenswrapper[7784]: I0223 13:01:25.795031 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" event={"ID":"86d78712-4b0d-41c2-b032-69354f80add1","Type":"ContainerStarted","Data":"e9d7a7a1022f0806a1ff5f0a77c69b4b858f7f242a175772e09937b152269037"} Feb 23 13:01:26.329473 master-0 kubenswrapper[7784]: I0223 13:01:26.328095 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:26.329473 master-0 kubenswrapper[7784]: E0223 13:01:26.328365 7784 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: 
secret "serving-cert" not found Feb 23 13:01:26.329473 master-0 kubenswrapper[7784]: E0223 13:01:26.328466 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert podName:0d28aff4-b82a-4593-88f0-e59249baf316 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:28.328440184 +0000 UTC m=+31.063293817 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert") pod "route-controller-manager-5bb8b89bff-xff2j" (UID: "0d28aff4-b82a-4593-88f0-e59249baf316") : secret "serving-cert" not found Feb 23 13:01:28.360201 master-0 kubenswrapper[7784]: I0223 13:01:28.360109 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:28.360986 master-0 kubenswrapper[7784]: E0223 13:01:28.360397 7784 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:28.360986 master-0 kubenswrapper[7784]: E0223 13:01:28.360465 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert podName:0d28aff4-b82a-4593-88f0-e59249baf316 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:32.360442381 +0000 UTC m=+35.095296034 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert") pod "route-controller-manager-5bb8b89bff-xff2j" (UID: "0d28aff4-b82a-4593-88f0-e59249baf316") : secret "serving-cert" not found Feb 23 13:01:30.162897 master-0 kubenswrapper[7784]: I0223 13:01:30.162147 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-fcd66d8b-df6ns"] Feb 23 13:01:30.165040 master-0 kubenswrapper[7784]: I0223 13:01:30.165016 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.171574 master-0 kubenswrapper[7784]: I0223 13:01:30.171520 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 13:01:30.171574 master-0 kubenswrapper[7784]: I0223 13:01:30.171543 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 13:01:30.172592 master-0 kubenswrapper[7784]: I0223 13:01:30.172554 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Feb 23 13:01:30.173065 master-0 kubenswrapper[7784]: I0223 13:01:30.173036 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Feb 23 13:01:30.173829 master-0 kubenswrapper[7784]: I0223 13:01:30.173791 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 13:01:30.174578 master-0 kubenswrapper[7784]: I0223 13:01:30.174323 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 13:01:30.176089 master-0 kubenswrapper[7784]: I0223 13:01:30.176039 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 13:01:30.176209 master-0 kubenswrapper[7784]: I0223 
13:01:30.176044 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 13:01:30.176629 master-0 kubenswrapper[7784]: I0223 13:01:30.176603 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 13:01:30.191370 master-0 kubenswrapper[7784]: I0223 13:01:30.188229 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-fcd66d8b-df6ns"] Feb 23 13:01:30.194673 master-0 kubenswrapper[7784]: I0223 13:01:30.194188 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 13:01:30.274524 master-0 kubenswrapper[7784]: I0223 13:01:30.274451 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-trusted-ca-bundle\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.274800 master-0 kubenswrapper[7784]: I0223 13:01:30.274647 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-node-pullsecrets\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.274800 master-0 kubenswrapper[7784]: I0223 13:01:30.274703 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-image-import-ca\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.274871 master-0 
kubenswrapper[7784]: I0223 13:01:30.274814 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-audit-dir\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.274904 master-0 kubenswrapper[7784]: I0223 13:01:30.274883 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.274936 master-0 kubenswrapper[7784]: I0223 13:01:30.274906 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv2jf\" (UniqueName: \"kubernetes.io/projected/66317b07-a84a-46b1-bc36-aca85060b214-kube-api-access-jv2jf\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.275215 master-0 kubenswrapper[7784]: I0223 13:01:30.275156 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.275261 master-0 kubenswrapper[7784]: I0223 13:01:30.275220 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-etcd-client\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " 
pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.275261 master-0 kubenswrapper[7784]: I0223 13:01:30.275239 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-etcd-serving-ca\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.275389 master-0 kubenswrapper[7784]: I0223 13:01:30.275332 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-config\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.275389 master-0 kubenswrapper[7784]: I0223 13:01:30.275378 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-encryption-config\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.377018 master-0 kubenswrapper[7784]: I0223 13:01:30.376944 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:01:30.377018 master-0 kubenswrapper[7784]: I0223 13:01:30.377017 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:01:30.377287 master-0 kubenswrapper[7784]: E0223 13:01:30.377170 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:30.377321 master-0 kubenswrapper[7784]: I0223 13:01:30.377274 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:01:30.377503 master-0 kubenswrapper[7784]: E0223 13:01:30.377442 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:02.377318218 +0000 UTC m=+65.112171871 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:30.377592 master-0 kubenswrapper[7784]: I0223 13:01:30.377549 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:01:30.377760 master-0 kubenswrapper[7784]: E0223 13:01:30.377718 7784 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 23 13:01:30.377800 master-0 kubenswrapper[7784]: I0223 13:01:30.377776 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.377854 master-0 kubenswrapper[7784]: E0223 13:01:30.377830 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs podName:e941c759-ab95-4b30-a571-6c132ab0e639 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:02.37779619 +0000 UTC m=+65.112649873 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs") pod "network-metrics-daemon-bbrcr" (UID: "e941c759-ab95-4b30-a571-6c132ab0e639") : secret "metrics-daemon-secret" not found Feb 23 13:01:30.377920 master-0 kubenswrapper[7784]: I0223 13:01:30.377886 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-etcd-client\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.377971 master-0 kubenswrapper[7784]: I0223 13:01:30.377945 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-etcd-serving-ca\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.378007 master-0 kubenswrapper[7784]: E0223 13:01:30.377986 7784 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 23 13:01:30.378103 master-0 kubenswrapper[7784]: E0223 13:01:30.378065 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert podName:66317b07-a84a-46b1-bc36-aca85060b214 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.878049696 +0000 UTC m=+33.612903559 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert") pod "apiserver-fcd66d8b-df6ns" (UID: "66317b07-a84a-46b1-bc36-aca85060b214") : secret "serving-cert" not found Feb 23 13:01:30.378170 master-0 kubenswrapper[7784]: I0223 13:01:30.377995 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:01:30.378401 master-0 kubenswrapper[7784]: I0223 13:01:30.378366 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:30.378524 master-0 kubenswrapper[7784]: I0223 13:01:30.378506 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:01:30.378636 master-0 kubenswrapper[7784]: I0223 13:01:30.378618 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-config\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " 
pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.378734 master-0 kubenswrapper[7784]: I0223 13:01:30.378711 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-encryption-config\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.378882 master-0 kubenswrapper[7784]: I0223 13:01:30.378862 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-trusted-ca-bundle\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.379021 master-0 kubenswrapper[7784]: I0223 13:01:30.379004 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:01:30.379121 master-0 kubenswrapper[7784]: I0223 13:01:30.379091 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-etcd-serving-ca\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.379217 master-0 kubenswrapper[7784]: I0223 13:01:30.379196 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: 
\"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:30.379318 master-0 kubenswrapper[7784]: I0223 13:01:30.379302 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-node-pullsecrets\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.379449 master-0 kubenswrapper[7784]: I0223 13:01:30.379431 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-image-import-ca\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.379554 master-0 kubenswrapper[7784]: I0223 13:01:30.379540 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" Feb 23 13:01:30.379677 master-0 kubenswrapper[7784]: I0223 13:01:30.379635 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-node-pullsecrets\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.379723 master-0 kubenswrapper[7784]: I0223 13:01:30.379641 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-audit-dir\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.379806 master-0 kubenswrapper[7784]: I0223 13:01:30.379774 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:01:30.379884 master-0 kubenswrapper[7784]: I0223 13:01:30.379792 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-audit-dir\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.379998 master-0 kubenswrapper[7784]: I0223 13:01:30.379877 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv2jf\" (UniqueName: \"kubernetes.io/projected/66317b07-a84a-46b1-bc36-aca85060b214-kube-api-access-jv2jf\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.380096 master-0 kubenswrapper[7784]: I0223 13:01:30.380079 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.380298 master-0 kubenswrapper[7784]: I0223 13:01:30.379568 7784 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-config\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.380406 master-0 kubenswrapper[7784]: I0223 13:01:30.380291 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-image-import-ca\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.380489 master-0 kubenswrapper[7784]: I0223 13:01:30.380454 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-trusted-ca-bundle\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.380848 master-0 kubenswrapper[7784]: E0223 13:01:30.379887 7784 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:30.380848 master-0 kubenswrapper[7784]: E0223 13:01:30.380277 7784 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 23 13:01:30.380962 master-0 kubenswrapper[7784]: E0223 13:01:30.380886 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit podName:66317b07-a84a-46b1-bc36-aca85060b214 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:30.880862757 +0000 UTC m=+33.615716400 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit") pod "apiserver-fcd66d8b-df6ns" (UID: "66317b07-a84a-46b1-bc36-aca85060b214") : configmap "audit-0" not found Feb 23 13:01:30.381092 master-0 kubenswrapper[7784]: E0223 13:01:30.381059 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs podName:1b0122c7-1407-4a35-afcc-2c6b1225e830 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:02.381036141 +0000 UTC m=+65.115889794 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-rz2zl" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830") : secret "multus-admission-controller-secret" not found Feb 23 13:01:30.387385 master-0 kubenswrapper[7784]: I0223 13:01:30.387324 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lphxz\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:01:30.387385 master-0 kubenswrapper[7784]: I0223 13:01:30.387377 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:30.387880 master-0 kubenswrapper[7784]: I0223 13:01:30.387418 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:30.387965 master-0 kubenswrapper[7784]: I0223 13:01:30.387924 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:01:30.389360 master-0 kubenswrapper[7784]: I0223 13:01:30.389286 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:01:30.390033 master-0 kubenswrapper[7784]: I0223 13:01:30.389998 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:01:30.391471 master-0 kubenswrapper[7784]: I0223 13:01:30.391431 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-encryption-config\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.393421 master-0 kubenswrapper[7784]: I0223 
13:01:30.392734 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-etcd-client\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.393421 master-0 kubenswrapper[7784]: I0223 13:01:30.392999 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:01:30.393828 master-0 kubenswrapper[7784]: I0223 13:01:30.393602 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:01:30.410581 master-0 kubenswrapper[7784]: I0223 13:01:30.406564 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv2jf\" (UniqueName: \"kubernetes.io/projected/66317b07-a84a-46b1-bc36-aca85060b214-kube-api-access-jv2jf\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.563257 master-0 kubenswrapper[7784]: I0223 13:01:30.562821 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:01:30.563812 master-0 kubenswrapper[7784]: I0223 13:01:30.562856 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:01:30.564276 master-0 kubenswrapper[7784]: I0223 13:01:30.562927 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:01:30.564276 master-0 kubenswrapper[7784]: I0223 13:01:30.563011 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:01:30.564276 master-0 kubenswrapper[7784]: I0223 13:01:30.564109 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:01:30.564276 master-0 kubenswrapper[7784]: I0223 13:01:30.563054 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:01:30.564276 master-0 kubenswrapper[7784]: I0223 13:01:30.564230 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:01:30.869672 master-0 kubenswrapper[7784]: I0223 13:01:30.869608 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" event={"ID":"a04058be-6928-48c4-a71e-bd9e6427c097","Type":"ContainerStarted","Data":"caf590b2b92c1730b6806b3d56a7d6034f1571ee7f59de88a7b217b327e76afe"} Feb 23 13:01:30.874863 master-0 kubenswrapper[7784]: I0223 13:01:30.874691 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" event={"ID":"86d78712-4b0d-41c2-b032-69354f80add1","Type":"ContainerStarted","Data":"6bc72910d73e59da947d7208f518bf167822ca42ab3b5439af75035f87abbfdc"} Feb 23 13:01:30.875155 master-0 kubenswrapper[7784]: I0223 13:01:30.875101 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:30.877369 master-0 kubenswrapper[7784]: I0223 13:01:30.877211 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-k9h69"] Feb 23 13:01:30.903358 master-0 kubenswrapper[7784]: I0223 13:01:30.903074 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:30.903358 master-0 kubenswrapper[7784]: I0223 13:01:30.903150 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" 
Feb 23 13:01:30.903358 master-0 kubenswrapper[7784]: E0223 13:01:30.903288 7784 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 23 13:01:30.903358 master-0 kubenswrapper[7784]: E0223 13:01:30.903367 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert podName:66317b07-a84a-46b1-bc36-aca85060b214 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:31.903332175 +0000 UTC m=+34.638185818 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert") pod "apiserver-fcd66d8b-df6ns" (UID: "66317b07-a84a-46b1-bc36-aca85060b214") : secret "serving-cert" not found Feb 23 13:01:30.903984 master-0 kubenswrapper[7784]: E0223 13:01:30.903953 7784 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 23 13:01:30.903984 master-0 kubenswrapper[7784]: E0223 13:01:30.903984 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit podName:66317b07-a84a-46b1-bc36-aca85060b214 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:31.903976151 +0000 UTC m=+34.638829784 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit") pod "apiserver-fcd66d8b-df6ns" (UID: "66317b07-a84a-46b1-bc36-aca85060b214") : configmap "audit-0" not found Feb 23 13:01:30.908419 master-0 kubenswrapper[7784]: I0223 13:01:30.907835 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:30.909957 master-0 kubenswrapper[7784]: I0223 13:01:30.909901 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"] Feb 23 13:01:30.930892 master-0 kubenswrapper[7784]: W0223 13:01:30.928716 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92eaa2e2_61cd_4279_a81f_72db51308148.slice/crio-a8df5908ff558e7c538aba1ffb0d5c449e7824bb42a6bb700748a71cb6ece532 WatchSource:0}: Error finding container a8df5908ff558e7c538aba1ffb0d5c449e7824bb42a6bb700748a71cb6ece532: Status 404 returned error can't find the container with id a8df5908ff558e7c538aba1ffb0d5c449e7824bb42a6bb700748a71cb6ece532 Feb 23 13:01:30.930892 master-0 kubenswrapper[7784]: I0223 13:01:30.930456 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" podStartSLOduration=2.111871922 podStartE2EDuration="6.930431163s" podCreationTimestamp="2026-02-23 13:01:24 +0000 UTC" firstStartedPulling="2026-02-23 13:01:25.037235432 +0000 UTC m=+27.772089095" lastFinishedPulling="2026-02-23 13:01:29.855794683 +0000 UTC m=+32.590648336" observedRunningTime="2026-02-23 13:01:30.929056629 +0000 UTC m=+33.663910272" watchObservedRunningTime="2026-02-23 13:01:30.930431163 +0000 UTC m=+33.665284796" Feb 23 13:01:30.943076 master-0 kubenswrapper[7784]: I0223 13:01:30.939706 7784 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"] Feb 23 13:01:30.966499 master-0 kubenswrapper[7784]: I0223 13:01:30.966428 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"] Feb 23 13:01:31.016686 master-0 kubenswrapper[7784]: I0223 13:01:31.016624 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-g8fdn"] Feb 23 13:01:31.030214 master-0 kubenswrapper[7784]: W0223 13:01:31.030168 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d58817c_970f_47b1_a5a5_a491f3e93426.slice/crio-d0a039f1b3c97c24fad7030cd466c101137e0b84c1b4d70fca972fbe2ee77402 WatchSource:0}: Error finding container d0a039f1b3c97c24fad7030cd466c101137e0b84c1b4d70fca972fbe2ee77402: Status 404 returned error can't find the container with id d0a039f1b3c97c24fad7030cd466c101137e0b84c1b4d70fca972fbe2ee77402 Feb 23 13:01:31.052660 master-0 kubenswrapper[7784]: I0223 13:01:31.052616 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-588zk"] Feb 23 13:01:31.888818 master-0 kubenswrapper[7784]: I0223 13:01:31.888333 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" event={"ID":"9bed6748-374e-4d8a-92a0-36d7d735d6b7","Type":"ContainerStarted","Data":"6a5cd0e8536fcc54350ba490f0eb9ca59486f86834d7ae3d682b2a13eefc4e56"} Feb 23 13:01:31.892101 master-0 kubenswrapper[7784]: I0223 13:01:31.892035 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" event={"ID":"35e97ed9-695d-483e-8878-4f231c79f1d2","Type":"ContainerStarted","Data":"b907614a9591efb37b88a7686e4a790de265f0304e777404050b8a95d8f70969"} Feb 23 13:01:31.893445 master-0 kubenswrapper[7784]: 
I0223 13:01:31.893407 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" event={"ID":"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b","Type":"ContainerStarted","Data":"f17488314313adf8e5d4ca3b5623c6439e87e4c15c926ef56dd3963870bb1fef"} Feb 23 13:01:31.894964 master-0 kubenswrapper[7784]: I0223 13:01:31.894924 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" event={"ID":"0d58817c-970f-47b1-a5a5-a491f3e93426","Type":"ContainerStarted","Data":"d0a039f1b3c97c24fad7030cd466c101137e0b84c1b4d70fca972fbe2ee77402"} Feb 23 13:01:31.896461 master-0 kubenswrapper[7784]: I0223 13:01:31.896432 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerStarted","Data":"fcbd0dfcd13ca5f8a8db77172cb144a3166d04c3140529e3b2606f791e557f0c"} Feb 23 13:01:31.898942 master-0 kubenswrapper[7784]: I0223 13:01:31.898666 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" event={"ID":"92eaa2e2-61cd-4279-a81f-72db51308148","Type":"ContainerStarted","Data":"a8df5908ff558e7c538aba1ffb0d5c449e7824bb42a6bb700748a71cb6ece532"} Feb 23 13:01:31.915991 master-0 kubenswrapper[7784]: I0223 13:01:31.915937 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:31.916177 master-0 kubenswrapper[7784]: I0223 13:01:31.916068 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:31.916221 master-0 kubenswrapper[7784]: E0223 13:01:31.916197 7784 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 23 13:01:31.916296 master-0 kubenswrapper[7784]: E0223 13:01:31.916275 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit podName:66317b07-a84a-46b1-bc36-aca85060b214 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:33.916249071 +0000 UTC m=+36.651102714 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit") pod "apiserver-fcd66d8b-df6ns" (UID: "66317b07-a84a-46b1-bc36-aca85060b214") : configmap "audit-0" not found Feb 23 13:01:31.917440 master-0 kubenswrapper[7784]: E0223 13:01:31.916931 7784 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 23 13:01:31.917440 master-0 kubenswrapper[7784]: E0223 13:01:31.916974 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert podName:66317b07-a84a-46b1-bc36-aca85060b214 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:33.916963639 +0000 UTC m=+36.651817282 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert") pod "apiserver-fcd66d8b-df6ns" (UID: "66317b07-a84a-46b1-bc36-aca85060b214") : secret "serving-cert" not found Feb 23 13:01:31.976296 master-0 kubenswrapper[7784]: I0223 13:01:31.976055 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:01:32.212600 master-0 kubenswrapper[7784]: I0223 13:01:32.212464 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 23 13:01:32.213677 master-0 kubenswrapper[7784]: I0223 13:01:32.213639 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.217053 master-0 kubenswrapper[7784]: I0223 13:01:32.217027 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 23 13:01:32.221599 master-0 kubenswrapper[7784]: I0223 13:01:32.221558 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 23 13:01:32.324035 master-0 kubenswrapper[7784]: I0223 13:01:32.323744 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29d3a080-c8a3-4359-9442-972bf4bb9b04-kube-api-access\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.324263 master-0 kubenswrapper[7784]: I0223 13:01:32.324195 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-var-lock\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " 
pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.324314 master-0 kubenswrapper[7784]: I0223 13:01:32.324273 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.426974 master-0 kubenswrapper[7784]: I0223 13:01:32.426869 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29d3a080-c8a3-4359-9442-972bf4bb9b04-kube-api-access\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.426974 master-0 kubenswrapper[7784]: I0223 13:01:32.426946 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert\") pod \"route-controller-manager-5bb8b89bff-xff2j\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") " pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" Feb 23 13:01:32.426974 master-0 kubenswrapper[7784]: I0223 13:01:32.426987 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-var-lock\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.427256 master-0 kubenswrapper[7784]: I0223 13:01:32.427050 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") 
" pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.427256 master-0 kubenswrapper[7784]: E0223 13:01:32.427245 7784 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:32.427441 master-0 kubenswrapper[7784]: E0223 13:01:32.427313 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert podName:0d28aff4-b82a-4593-88f0-e59249baf316 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:40.427293374 +0000 UTC m=+43.162147037 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert") pod "route-controller-manager-5bb8b89bff-xff2j" (UID: "0d28aff4-b82a-4593-88f0-e59249baf316") : secret "serving-cert" not found Feb 23 13:01:32.427802 master-0 kubenswrapper[7784]: I0223 13:01:32.427718 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.427802 master-0 kubenswrapper[7784]: I0223 13:01:32.427756 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-var-lock\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.468496 master-0 kubenswrapper[7784]: I0223 13:01:32.467663 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29d3a080-c8a3-4359-9442-972bf4bb9b04-kube-api-access\") pod \"installer-1-master-0\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " 
pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.542579 master-0 kubenswrapper[7784]: I0223 13:01:32.542494 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 23 13:01:32.982049 master-0 kubenswrapper[7784]: I0223 13:01:32.981998 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:01:33.966077 master-0 kubenswrapper[7784]: I0223 13:01:33.966013 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:33.966327 master-0 kubenswrapper[7784]: I0223 13:01:33.966144 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit\") pod \"apiserver-fcd66d8b-df6ns\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") " pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" Feb 23 13:01:33.966327 master-0 kubenswrapper[7784]: E0223 13:01:33.966261 7784 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 23 13:01:33.966627 master-0 kubenswrapper[7784]: E0223 13:01:33.966595 7784 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 23 13:01:33.966791 master-0 kubenswrapper[7784]: E0223 13:01:33.966765 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert podName:66317b07-a84a-46b1-bc36-aca85060b214 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:37.966702999 +0000 UTC m=+40.701556662 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert") pod "apiserver-fcd66d8b-df6ns" (UID: "66317b07-a84a-46b1-bc36-aca85060b214") : secret "serving-cert" not found
Feb 23 13:01:33.966942 master-0 kubenswrapper[7784]: E0223 13:01:33.966918 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit podName:66317b07-a84a-46b1-bc36-aca85060b214 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:37.966904564 +0000 UTC m=+40.701758217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit") pod "apiserver-fcd66d8b-df6ns" (UID: "66317b07-a84a-46b1-bc36-aca85060b214") : configmap "audit-0" not found
Feb 23 13:01:35.069834 master-0 kubenswrapper[7784]: I0223 13:01:35.069316 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-fcd66d8b-df6ns"]
Feb 23 13:01:35.070990 master-0 kubenswrapper[7784]: E0223 13:01:35.070067 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-fcd66d8b-df6ns" podUID="66317b07-a84a-46b1-bc36-aca85060b214"
Feb 23 13:01:35.327647 master-0 kubenswrapper[7784]: I0223 13:01:35.327482 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 23 13:01:35.337746 master-0 kubenswrapper[7784]: I0223 13:01:35.337677 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 23 13:01:35.340504 master-0 kubenswrapper[7784]: I0223 13:01:35.340455 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.344238 master-0 kubenswrapper[7784]: I0223 13:01:35.344176 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 23 13:01:35.400117 master-0 kubenswrapper[7784]: I0223 13:01:35.400055 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-var-lock\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.400520 master-0 kubenswrapper[7784]: I0223 13:01:35.400364 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.400520 master-0 kubenswrapper[7784]: I0223 13:01:35.400472 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.501735 master-0 kubenswrapper[7784]: I0223 13:01:35.501644 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.503307 master-0 kubenswrapper[7784]: I0223 13:01:35.501835 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.503307 master-0 kubenswrapper[7784]: I0223 13:01:35.501969 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-var-lock\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.503307 master-0 kubenswrapper[7784]: I0223 13:01:35.502125 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.503307 master-0 kubenswrapper[7784]: I0223 13:01:35.502619 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-var-lock\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.530980 master-0 kubenswrapper[7784]: I0223 13:01:35.530925 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.676950 master-0 kubenswrapper[7784]: I0223 13:01:35.676778 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:01:35.916567 master-0 kubenswrapper[7784]: I0223 13:01:35.916479 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fcd66d8b-df6ns"
Feb 23 13:01:35.928238 master-0 kubenswrapper[7784]: I0223 13:01:35.928113 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fcd66d8b-df6ns"
Feb 23 13:01:36.011406 master-0 kubenswrapper[7784]: I0223 13:01:36.011287 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-node-pullsecrets\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.011406 master-0 kubenswrapper[7784]: I0223 13:01:36.011375 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-config\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.011738 master-0 kubenswrapper[7784]: I0223 13:01:36.011441 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-encryption-config\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.011738 master-0 kubenswrapper[7784]: I0223 13:01:36.011486 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-etcd-serving-ca\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.011738 master-0 kubenswrapper[7784]: I0223 13:01:36.011478 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:01:36.012394 master-0 kubenswrapper[7784]: I0223 13:01:36.012203 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-config" (OuterVolumeSpecName: "config") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:36.012394 master-0 kubenswrapper[7784]: I0223 13:01:36.012297 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-image-import-ca\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.012548 master-0 kubenswrapper[7784]: I0223 13:01:36.012422 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv2jf\" (UniqueName: \"kubernetes.io/projected/66317b07-a84a-46b1-bc36-aca85060b214-kube-api-access-jv2jf\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.012548 master-0 kubenswrapper[7784]: I0223 13:01:36.012472 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-etcd-client\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.012665 master-0 kubenswrapper[7784]: I0223 13:01:36.012574 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-audit-dir\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.012665 master-0 kubenswrapper[7784]: I0223 13:01:36.012647 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-trusted-ca-bundle\") pod \"66317b07-a84a-46b1-bc36-aca85060b214\" (UID: \"66317b07-a84a-46b1-bc36-aca85060b214\") "
Feb 23 13:01:36.013019 master-0 kubenswrapper[7784]: I0223 13:01:36.012966 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:36.013987 master-0 kubenswrapper[7784]: I0223 13:01:36.013375 7784 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-image-import-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.013987 master-0 kubenswrapper[7784]: I0223 13:01:36.013413 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.013987 master-0 kubenswrapper[7784]: I0223 13:01:36.013630 7784 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.013987 master-0 kubenswrapper[7784]: I0223 13:01:36.013519 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:01:36.013987 master-0 kubenswrapper[7784]: I0223 13:01:36.013701 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:36.013987 master-0 kubenswrapper[7784]: I0223 13:01:36.013917 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:36.016266 master-0 kubenswrapper[7784]: I0223 13:01:36.016178 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:01:36.017099 master-0 kubenswrapper[7784]: I0223 13:01:36.016986 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66317b07-a84a-46b1-bc36-aca85060b214-kube-api-access-jv2jf" (OuterVolumeSpecName: "kube-api-access-jv2jf") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "kube-api-access-jv2jf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:01:36.017921 master-0 kubenswrapper[7784]: I0223 13:01:36.017840 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "66317b07-a84a-46b1-bc36-aca85060b214" (UID: "66317b07-a84a-46b1-bc36-aca85060b214"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:01:36.042409 master-0 kubenswrapper[7784]: I0223 13:01:36.041685 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-rnz52"
Feb 23 13:01:36.114708 master-0 kubenswrapper[7784]: I0223 13:01:36.114552 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv2jf\" (UniqueName: \"kubernetes.io/projected/66317b07-a84a-46b1-bc36-aca85060b214-kube-api-access-jv2jf\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.114708 master-0 kubenswrapper[7784]: I0223 13:01:36.114598 7784 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-etcd-client\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.114708 master-0 kubenswrapper[7784]: I0223 13:01:36.114614 7784 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66317b07-a84a-46b1-bc36-aca85060b214-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.114708 master-0 kubenswrapper[7784]: I0223 13:01:36.114625 7784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.114708 master-0 kubenswrapper[7784]: I0223 13:01:36.114634 7784 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-encryption-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.114708 master-0 kubenswrapper[7784]: I0223 13:01:36.114642 7784 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:36.920371 master-0 kubenswrapper[7784]: I0223 13:01:36.920286 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fcd66d8b-df6ns"
Feb 23 13:01:36.991469 master-0 kubenswrapper[7784]: I0223 13:01:36.988650 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9f44475c9-drjp5"]
Feb 23 13:01:36.991469 master-0 kubenswrapper[7784]: I0223 13:01:36.989575 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.992055 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-fcd66d8b-df6ns"]
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.993179 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.993973 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.995739 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.996121 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.996601 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.996610 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.996754 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.996834 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 23 13:01:36.998086 master-0 kubenswrapper[7784]: I0223 13:01:36.997623 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 23 13:01:37.013008 master-0 kubenswrapper[7784]: I0223 13:01:37.012913 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 23 13:01:37.016324 master-0 kubenswrapper[7784]: I0223 13:01:37.016285 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-fcd66d8b-df6ns"]
Feb 23 13:01:37.018062 master-0 kubenswrapper[7784]: I0223 13:01:37.018036 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9f44475c9-drjp5"]
Feb 23 13:01:37.126585 master-0 kubenswrapper[7784]: I0223 13:01:37.126517 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-audit\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.126585 master-0 kubenswrapper[7784]: I0223 13:01:37.126593 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-client\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127152 master-0 kubenswrapper[7784]: I0223 13:01:37.126770 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-serving-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127152 master-0 kubenswrapper[7784]: I0223 13:01:37.126924 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127152 master-0 kubenswrapper[7784]: I0223 13:01:37.126974 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-serving-cert\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127152 master-0 kubenswrapper[7784]: I0223 13:01:37.127029 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-trusted-ca-bundle\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127152 master-0 kubenswrapper[7784]: I0223 13:01:37.127109 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-audit-dir\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127298 master-0 kubenswrapper[7784]: I0223 13:01:37.127178 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl5r2\" (UniqueName: \"kubernetes.io/projected/922e0be5-23c2-481e-89be-e918dc4ce90c-kube-api-access-sl5r2\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127298 master-0 kubenswrapper[7784]: I0223 13:01:37.127213 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-encryption-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127400 master-0 kubenswrapper[7784]: I0223 13:01:37.127300 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-node-pullsecrets\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127400 master-0 kubenswrapper[7784]: I0223 13:01:37.127363 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-image-import-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.127526 master-0 kubenswrapper[7784]: I0223 13:01:37.127488 7784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66317b07-a84a-46b1-bc36-aca85060b214-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:37.127580 master-0 kubenswrapper[7784]: I0223 13:01:37.127528 7784 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66317b07-a84a-46b1-bc36-aca85060b214-audit\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:37.228556 master-0 kubenswrapper[7784]: I0223 13:01:37.228405 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-audit\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228556 master-0 kubenswrapper[7784]: I0223 13:01:37.228481 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-client\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228556 master-0 kubenswrapper[7784]: I0223 13:01:37.228502 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-serving-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228556 master-0 kubenswrapper[7784]: I0223 13:01:37.228524 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228556 master-0 kubenswrapper[7784]: I0223 13:01:37.228540 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-serving-cert\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228556 master-0 kubenswrapper[7784]: I0223 13:01:37.228555 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-trusted-ca-bundle\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228915 master-0 kubenswrapper[7784]: I0223 13:01:37.228571 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-audit-dir\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228915 master-0 kubenswrapper[7784]: I0223 13:01:37.228587 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl5r2\" (UniqueName: \"kubernetes.io/projected/922e0be5-23c2-481e-89be-e918dc4ce90c-kube-api-access-sl5r2\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228915 master-0 kubenswrapper[7784]: I0223 13:01:37.228603 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-encryption-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228915 master-0 kubenswrapper[7784]: I0223 13:01:37.228623 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-node-pullsecrets\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.228915 master-0 kubenswrapper[7784]: I0223 13:01:37.228637 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-image-import-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.229333 master-0 kubenswrapper[7784]: I0223 13:01:37.229312 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-image-import-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.229791 master-0 kubenswrapper[7784]: I0223 13:01:37.229768 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-audit\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.231355 master-0 kubenswrapper[7784]: I0223 13:01:37.231290 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-trusted-ca-bundle\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.231759 master-0 kubenswrapper[7784]: I0223 13:01:37.231701 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-audit-dir\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.231800 master-0 kubenswrapper[7784]: I0223 13:01:37.231772 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-serving-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.231870 master-0 kubenswrapper[7784]: I0223 13:01:37.231832 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-node-pullsecrets\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.232789 master-0 kubenswrapper[7784]: I0223 13:01:37.232746 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.232988 master-0 kubenswrapper[7784]: I0223 13:01:37.232971 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-client\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.235263 master-0 kubenswrapper[7784]: I0223 13:01:37.234608 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-encryption-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.235319 master-0 kubenswrapper[7784]: I0223 13:01:37.235302 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-serving-cert\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.251819 master-0 kubenswrapper[7784]: I0223 13:01:37.251750 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl5r2\" (UniqueName: \"kubernetes.io/projected/922e0be5-23c2-481e-89be-e918dc4ce90c-kube-api-access-sl5r2\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.317016 master-0 kubenswrapper[7784]: I0223 13:01:37.316948 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:01:37.520583 master-0 kubenswrapper[7784]: I0223 13:01:37.520522 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66317b07-a84a-46b1-bc36-aca85060b214" path="/var/lib/kubelet/pods/66317b07-a84a-46b1-bc36-aca85060b214/volumes"
Feb 23 13:01:38.850917 master-0 kubenswrapper[7784]: I0223 13:01:38.850430 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"]
Feb 23 13:01:38.851623 master-0 kubenswrapper[7784]: I0223 13:01:38.851176 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" podUID="86d78712-4b0d-41c2-b032-69354f80add1" containerName="controller-manager" containerID="cri-o://6bc72910d73e59da947d7208f518bf167822ca42ab3b5439af75035f87abbfdc" gracePeriod=30
Feb 23 13:01:38.885741 master-0 kubenswrapper[7784]: I0223 13:01:38.885048 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"]
Feb 23 13:01:38.885741 master-0 kubenswrapper[7784]: E0223 13:01:38.885378 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j" podUID="0d28aff4-b82a-4593-88f0-e59249baf316"
Feb 23 13:01:38.931939 master-0 kubenswrapper[7784]: I0223 13:01:38.931880 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"
Feb 23 13:01:38.971125 master-0 kubenswrapper[7784]: I0223 13:01:38.971052 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"
Feb 23 13:01:39.060787 master-0 kubenswrapper[7784]: I0223 13:01:39.060716 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxr8g\" (UniqueName: \"kubernetes.io/projected/0d28aff4-b82a-4593-88f0-e59249baf316-kube-api-access-gxr8g\") pod \"0d28aff4-b82a-4593-88f0-e59249baf316\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") "
Feb 23 13:01:39.061002 master-0 kubenswrapper[7784]: I0223 13:01:39.060875 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-client-ca\") pod \"0d28aff4-b82a-4593-88f0-e59249baf316\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") "
Feb 23 13:01:39.061002 master-0 kubenswrapper[7784]: I0223 13:01:39.060950 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-config\") pod \"0d28aff4-b82a-4593-88f0-e59249baf316\" (UID: \"0d28aff4-b82a-4593-88f0-e59249baf316\") "
Feb 23 13:01:39.061718 master-0 kubenswrapper[7784]: I0223 13:01:39.061666 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d28aff4-b82a-4593-88f0-e59249baf316" (UID: "0d28aff4-b82a-4593-88f0-e59249baf316"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:39.061919 master-0 kubenswrapper[7784]: I0223 13:01:39.061883 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-config" (OuterVolumeSpecName: "config") pod "0d28aff4-b82a-4593-88f0-e59249baf316" (UID: "0d28aff4-b82a-4593-88f0-e59249baf316"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:39.067702 master-0 kubenswrapper[7784]: I0223 13:01:39.067665 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d28aff4-b82a-4593-88f0-e59249baf316-kube-api-access-gxr8g" (OuterVolumeSpecName: "kube-api-access-gxr8g") pod "0d28aff4-b82a-4593-88f0-e59249baf316" (UID: "0d28aff4-b82a-4593-88f0-e59249baf316"). InnerVolumeSpecName "kube-api-access-gxr8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:01:39.162679 master-0 kubenswrapper[7784]: I0223 13:01:39.162474 7784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.162679 master-0 kubenswrapper[7784]: I0223 13:01:39.162522 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d28aff4-b82a-4593-88f0-e59249baf316-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.162679 master-0 kubenswrapper[7784]: I0223 13:01:39.162535 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxr8g\" (UniqueName: \"kubernetes.io/projected/0d28aff4-b82a-4593-88f0-e59249baf316-kube-api-access-gxr8g\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.942505 master-0 kubenswrapper[7784]: I0223 13:01:39.941259 7784 generic.go:334] "Generic (PLEG): container finished" podID="86d78712-4b0d-41c2-b032-69354f80add1" containerID="6bc72910d73e59da947d7208f518bf167822ca42ab3b5439af75035f87abbfdc" exitCode=0
Feb 23 13:01:39.942505 master-0 kubenswrapper[7784]: I0223 13:01:39.941708 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"
Feb 23 13:01:39.942505 master-0 kubenswrapper[7784]: I0223 13:01:39.942256 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" event={"ID":"86d78712-4b0d-41c2-b032-69354f80add1","Type":"ContainerDied","Data":"6bc72910d73e59da947d7208f518bf167822ca42ab3b5439af75035f87abbfdc"}
Feb 23 13:01:40.055610 master-0 kubenswrapper[7784]: I0223 13:01:40.055280 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 23 13:01:40.072296 master-0 kubenswrapper[7784]: I0223 13:01:40.071845 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 23 13:01:40.103103 master-0 kubenswrapper[7784]: I0223 13:01:40.102611 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9f44475c9-drjp5"]
Feb 23 13:01:40.108791 master-0 kubenswrapper[7784]: W0223 13:01:40.108736 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod29d3a080_c8a3_4359_9442_972bf4bb9b04.slice/crio-f2f8d85ad085a2df67368f809c78552ac79db7bb7c6a318c3cb36dbd40dda7af WatchSource:0}: Error finding container f2f8d85ad085a2df67368f809c78552ac79db7bb7c6a318c3cb36dbd40dda7af: Status 404 returned error can't find the container with id f2f8d85ad085a2df67368f809c78552ac79db7bb7c6a318c3cb36dbd40dda7af
Feb 23 13:01:40.229457 master-0 kubenswrapper[7784]: I0223 13:01:40.227779 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57"]
Feb 23 13:01:40.243626 master-0 kubenswrapper[7784]: I0223 13:01:40.243547 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"]
Feb 23 13:01:40.243863 master-0 kubenswrapper[7784]: I0223
13:01:40.243831 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.246882 master-0 kubenswrapper[7784]: I0223 13:01:40.246036 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57"] Feb 23 13:01:40.249078 master-0 kubenswrapper[7784]: I0223 13:01:40.249028 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb8b89bff-xff2j"] Feb 23 13:01:40.250499 master-0 kubenswrapper[7784]: I0223 13:01:40.249312 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:01:40.250499 master-0 kubenswrapper[7784]: I0223 13:01:40.249448 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:01:40.250499 master-0 kubenswrapper[7784]: I0223 13:01:40.249627 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 13:01:40.257653 master-0 kubenswrapper[7784]: I0223 13:01:40.257619 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:01:40.257653 master-0 kubenswrapper[7784]: I0223 13:01:40.257642 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:01:40.259865 master-0 kubenswrapper[7784]: I0223 13:01:40.259735 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:40.382027 master-0 kubenswrapper[7784]: I0223 13:01:40.381556 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-client-ca\") pod \"86d78712-4b0d-41c2-b032-69354f80add1\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " Feb 23 13:01:40.382120 master-0 kubenswrapper[7784]: I0223 13:01:40.382045 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-proxy-ca-bundles\") pod \"86d78712-4b0d-41c2-b032-69354f80add1\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " Feb 23 13:01:40.382120 master-0 kubenswrapper[7784]: I0223 13:01:40.382114 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9f46\" (UniqueName: \"kubernetes.io/projected/86d78712-4b0d-41c2-b032-69354f80add1-kube-api-access-g9f46\") pod \"86d78712-4b0d-41c2-b032-69354f80add1\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " Feb 23 13:01:40.382211 master-0 kubenswrapper[7784]: I0223 13:01:40.382181 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-config\") pod \"86d78712-4b0d-41c2-b032-69354f80add1\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " Feb 23 13:01:40.382275 master-0 kubenswrapper[7784]: I0223 13:01:40.382221 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86d78712-4b0d-41c2-b032-69354f80add1-serving-cert\") pod \"86d78712-4b0d-41c2-b032-69354f80add1\" (UID: \"86d78712-4b0d-41c2-b032-69354f80add1\") " Feb 23 13:01:40.382422 master-0 kubenswrapper[7784]: I0223 13:01:40.382402 7784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-client-ca\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.382480 master-0 kubenswrapper[7784]: I0223 13:01:40.382442 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-config\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.382480 master-0 kubenswrapper[7784]: I0223 13:01:40.382464 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpb8j\" (UniqueName: \"kubernetes.io/projected/f3fcd99e-272e-4877-a2ea-9492ad7f9689-kube-api-access-xpb8j\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.382554 master-0 kubenswrapper[7784]: I0223 13:01:40.382498 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fcd99e-272e-4877-a2ea-9492ad7f9689-serving-cert\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.382554 master-0 kubenswrapper[7784]: I0223 13:01:40.382544 7784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0d28aff4-b82a-4593-88f0-e59249baf316-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:40.384079 master-0 kubenswrapper[7784]: I0223 13:01:40.383644 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "86d78712-4b0d-41c2-b032-69354f80add1" (UID: "86d78712-4b0d-41c2-b032-69354f80add1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:40.384079 master-0 kubenswrapper[7784]: I0223 13:01:40.383930 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-config" (OuterVolumeSpecName: "config") pod "86d78712-4b0d-41c2-b032-69354f80add1" (UID: "86d78712-4b0d-41c2-b032-69354f80add1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:40.384561 master-0 kubenswrapper[7784]: I0223 13:01:40.384536 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-client-ca" (OuterVolumeSpecName: "client-ca") pod "86d78712-4b0d-41c2-b032-69354f80add1" (UID: "86d78712-4b0d-41c2-b032-69354f80add1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:40.393371 master-0 kubenswrapper[7784]: I0223 13:01:40.391333 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d78712-4b0d-41c2-b032-69354f80add1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "86d78712-4b0d-41c2-b032-69354f80add1" (UID: "86d78712-4b0d-41c2-b032-69354f80add1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:01:40.393371 master-0 kubenswrapper[7784]: I0223 13:01:40.392922 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d78712-4b0d-41c2-b032-69354f80add1-kube-api-access-g9f46" (OuterVolumeSpecName: "kube-api-access-g9f46") pod "86d78712-4b0d-41c2-b032-69354f80add1" (UID: "86d78712-4b0d-41c2-b032-69354f80add1"). InnerVolumeSpecName "kube-api-access-g9f46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:01:40.484034 master-0 kubenswrapper[7784]: I0223 13:01:40.483970 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-client-ca\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.484034 master-0 kubenswrapper[7784]: I0223 13:01:40.484036 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-config\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.484286 master-0 kubenswrapper[7784]: I0223 13:01:40.484059 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpb8j\" (UniqueName: \"kubernetes.io/projected/f3fcd99e-272e-4877-a2ea-9492ad7f9689-kube-api-access-xpb8j\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.484286 master-0 kubenswrapper[7784]: I0223 13:01:40.484113 7784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fcd99e-272e-4877-a2ea-9492ad7f9689-serving-cert\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.484286 master-0 kubenswrapper[7784]: I0223 13:01:40.484189 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9f46\" (UniqueName: \"kubernetes.io/projected/86d78712-4b0d-41c2-b032-69354f80add1-kube-api-access-g9f46\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:40.484286 master-0 kubenswrapper[7784]: I0223 13:01:40.484204 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:40.484286 master-0 kubenswrapper[7784]: I0223 13:01:40.484217 7784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86d78712-4b0d-41c2-b032-69354f80add1-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:40.484286 master-0 kubenswrapper[7784]: I0223 13:01:40.484228 7784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:40.484286 master-0 kubenswrapper[7784]: I0223 13:01:40.484241 7784 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86d78712-4b0d-41c2-b032-69354f80add1-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:40.487248 master-0 kubenswrapper[7784]: I0223 13:01:40.487198 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-config\") pod 
\"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.488061 master-0 kubenswrapper[7784]: I0223 13:01:40.487888 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-client-ca\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.488501 master-0 kubenswrapper[7784]: I0223 13:01:40.488459 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fcd99e-272e-4877-a2ea-9492ad7f9689-serving-cert\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:40.950484 master-0 kubenswrapper[7784]: I0223 13:01:40.949805 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" event={"ID":"0d58817c-970f-47b1-a5a5-a491f3e93426","Type":"ContainerStarted","Data":"dc977fa44eb94c7d2786be97eca168973cfb38931e1f243a628741f8ff82c479"} Feb 23 13:01:40.952644 master-0 kubenswrapper[7784]: I0223 13:01:40.952579 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" event={"ID":"9bed6748-374e-4d8a-92a0-36d7d735d6b7","Type":"ContainerStarted","Data":"270411b6123c0bacae1888bf14d25e000dd1ac5e1c534263a9270c23cb683c46"} Feb 23 13:01:40.954720 master-0 kubenswrapper[7784]: I0223 13:01:40.954659 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" 
event={"ID":"35e97ed9-695d-483e-8878-4f231c79f1d2","Type":"ContainerStarted","Data":"e4ed838542af022eb9712b2516ce0b1c3c0ca74d3f39f916a6f32d58ec0e24c3"} Feb 23 13:01:40.954917 master-0 kubenswrapper[7784]: I0223 13:01:40.954892 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:01:40.957826 master-0 kubenswrapper[7784]: I0223 13:01:40.957773 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"797b4e06-e895-4ccc-a8f8-9de5d3a6663f","Type":"ContainerStarted","Data":"7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab"} Feb 23 13:01:40.957946 master-0 kubenswrapper[7784]: I0223 13:01:40.957846 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"797b4e06-e895-4ccc-a8f8-9de5d3a6663f","Type":"ContainerStarted","Data":"f29334ecf8fe0616ba9fdb0115a73378b0ec78d2a311daffbf6cd7a0e8642f2d"} Feb 23 13:01:40.959527 master-0 kubenswrapper[7784]: I0223 13:01:40.959501 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" event={"ID":"922e0be5-23c2-481e-89be-e918dc4ce90c","Type":"ContainerStarted","Data":"2306eee29ab96f13a7b6b9bf9f3a4b8c1be47a50f030b34cf5a3b0197274b3fb"} Feb 23 13:01:40.961958 master-0 kubenswrapper[7784]: I0223 13:01:40.961867 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"29d3a080-c8a3-4359-9442-972bf4bb9b04","Type":"ContainerStarted","Data":"dd4d5f4a0ab82fe5e433041fcf11c703ce19588ca738c6da0621782807f531c9"} Feb 23 13:01:40.961958 master-0 kubenswrapper[7784]: I0223 13:01:40.961920 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" 
event={"ID":"29d3a080-c8a3-4359-9442-972bf4bb9b04","Type":"ContainerStarted","Data":"f2f8d85ad085a2df67368f809c78552ac79db7bb7c6a318c3cb36dbd40dda7af"} Feb 23 13:01:40.961958 master-0 kubenswrapper[7784]: I0223 13:01:40.961963 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:01:40.964776 master-0 kubenswrapper[7784]: I0223 13:01:40.964735 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" event={"ID":"86d78712-4b0d-41c2-b032-69354f80add1","Type":"ContainerDied","Data":"e9d7a7a1022f0806a1ff5f0a77c69b4b858f7f242a175772e09937b152269037"} Feb 23 13:01:40.964776 master-0 kubenswrapper[7784]: I0223 13:01:40.964775 7784 scope.go:117] "RemoveContainer" containerID="6bc72910d73e59da947d7208f518bf167822ca42ab3b5439af75035f87abbfdc" Feb 23 13:01:40.965001 master-0 kubenswrapper[7784]: I0223 13:01:40.964888 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75c5cddb8f-xkrds" Feb 23 13:01:40.969246 master-0 kubenswrapper[7784]: I0223 13:01:40.969164 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" event={"ID":"a04058be-6928-48c4-a71e-bd9e6427c097","Type":"ContainerStarted","Data":"7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce"} Feb 23 13:01:40.972545 master-0 kubenswrapper[7784]: I0223 13:01:40.972496 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" event={"ID":"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b","Type":"ContainerStarted","Data":"0efe8ed105d609ccbf5c920ac1cec74a5c93e57f139bf56a861745a2c828326f"} Feb 23 13:01:40.972545 master-0 kubenswrapper[7784]: I0223 13:01:40.972540 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" event={"ID":"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b","Type":"ContainerStarted","Data":"79d9c050a497905f92c557d9e3a73abb9b6e32e43f180c9a4fe382f0efb43853"} Feb 23 13:01:40.975119 master-0 kubenswrapper[7784]: I0223 13:01:40.975075 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerStarted","Data":"6a67a29aa148146532272c68203bbb4d9ee58cf18e1c723a10729c18e8a3f4a9"} Feb 23 13:01:40.975119 master-0 kubenswrapper[7784]: I0223 13:01:40.975108 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerStarted","Data":"c8289f028a5b9b2ff9bd84ee035e05cf3ab1f61b8019dd41bc447fe370637ef6"} Feb 23 13:01:40.977353 master-0 kubenswrapper[7784]: I0223 13:01:40.977303 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" event={"ID":"92eaa2e2-61cd-4279-a81f-72db51308148","Type":"ContainerStarted","Data":"2e40109d34052395c159362b1fc60377679fbb682b53af5d56f614bb5eac078e"} Feb 23 13:01:41.137117 master-0 kubenswrapper[7784]: I0223 13:01:41.136510 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpb8j\" (UniqueName: \"kubernetes.io/projected/f3fcd99e-272e-4877-a2ea-9492ad7f9689-kube-api-access-xpb8j\") pod \"route-controller-manager-7ddc78cd76-b9j57\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:41.142912 master-0 kubenswrapper[7784]: I0223 13:01:41.142788 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=9.142763504 podStartE2EDuration="9.142763504s" podCreationTimestamp="2026-02-23 13:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:41.141656306 +0000 UTC m=+43.876509949" watchObservedRunningTime="2026-02-23 13:01:41.142763504 +0000 UTC m=+43.877617147" Feb 23 13:01:41.217446 master-0 kubenswrapper[7784]: I0223 13:01:41.213226 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:41.349908 master-0 kubenswrapper[7784]: I0223 13:01:41.348660 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"] Feb 23 13:01:41.357641 master-0 kubenswrapper[7784]: I0223 13:01:41.357592 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75c5cddb8f-xkrds"] Feb 23 13:01:41.367634 master-0 kubenswrapper[7784]: I0223 13:01:41.366128 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-mjpd9"] Feb 23 13:01:41.367634 master-0 kubenswrapper[7784]: E0223 13:01:41.366369 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d78712-4b0d-41c2-b032-69354f80add1" containerName="controller-manager" Feb 23 13:01:41.367634 master-0 kubenswrapper[7784]: I0223 13:01:41.366388 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d78712-4b0d-41c2-b032-69354f80add1" containerName="controller-manager" Feb 23 13:01:41.367634 master-0 kubenswrapper[7784]: I0223 13:01:41.366460 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d78712-4b0d-41c2-b032-69354f80add1" containerName="controller-manager" Feb 23 13:01:41.367634 master-0 kubenswrapper[7784]: I0223 13:01:41.366898 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:01:41.492517 master-0 kubenswrapper[7784]: I0223 13:01:41.491522 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ljphn"] Feb 23 13:01:41.492517 master-0 kubenswrapper[7784]: I0223 13:01:41.492304 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-ljphn" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.502592 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.504056 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.504939 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505124 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505793 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-var-lib-kubelet\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505862 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-lib-modules\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505892 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505916 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-modprobe-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505933 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-tmp\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505954 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-kubernetes\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505973 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-sys\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.505992 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-tuned\") pod \"tuned-mjpd9\" (UID: 
\"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.506010 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-run\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.506034 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6d4r\" (UniqueName: \"kubernetes.io/projected/8422281d-af45-4f17-8f15-ac3fd9da4bbc-kube-api-access-d6d4r\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.506055 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-systemd\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.506083 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-conf\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.506111 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysconfig\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.509367 master-0 kubenswrapper[7784]: I0223 13:01:41.506132 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-host\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.545378 master-0 kubenswrapper[7784]: I0223 13:01:41.536634 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d28aff4-b82a-4593-88f0-e59249baf316" path="/var/lib/kubelet/pods/0d28aff4-b82a-4593-88f0-e59249baf316/volumes"
Feb 23 13:01:41.545378 master-0 kubenswrapper[7784]: I0223 13:01:41.537450 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d78712-4b0d-41c2-b032-69354f80add1" path="/var/lib/kubelet/pods/86d78712-4b0d-41c2-b032-69354f80add1/volumes"
Feb 23 13:01:41.545378 master-0 kubenswrapper[7784]: I0223 13:01:41.538049 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ljphn"]
Feb 23 13:01:41.558527 master-0 kubenswrapper[7784]: I0223 13:01:41.556675 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57"]
Feb 23 13:01:41.568210 master-0 kubenswrapper[7784]: W0223 13:01:41.567523 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3fcd99e_272e_4877_a2ea_9492ad7f9689.slice/crio-566f21c578e2700740d08fec58d84fd003c6e58e1135212cf29c1610b5504891 WatchSource:0}: Error finding container 566f21c578e2700740d08fec58d84fd003c6e58e1135212cf29c1610b5504891: Status 404 returned error can't find the container with id 566f21c578e2700740d08fec58d84fd003c6e58e1135212cf29c1610b5504891
Feb 23 13:01:41.610968 master-0 kubenswrapper[7784]: I0223 13:01:41.610883 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.610968 master-0 kubenswrapper[7784]: I0223 13:01:41.610946 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-modprobe-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.610968 master-0 kubenswrapper[7784]: I0223 13:01:41.610966 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-tmp\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611425 master-0 kubenswrapper[7784]: I0223 13:01:41.610992 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6cl\" (UniqueName: \"kubernetes.io/projected/2acc6d35-5679-4fac-970f-3d2ff954cc33-kube-api-access-kc6cl\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.611425 master-0 kubenswrapper[7784]: I0223 13:01:41.611013 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-kubernetes\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611425 master-0 kubenswrapper[7784]: I0223 13:01:41.611294 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-modprobe-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611425 master-0 kubenswrapper[7784]: I0223 13:01:41.611416 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-sys\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611550 master-0 kubenswrapper[7784]: I0223 13:01:41.611458 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-tuned\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611550 master-0 kubenswrapper[7784]: I0223 13:01:41.611491 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-run\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611550 master-0 kubenswrapper[7784]: I0223 13:01:41.611529 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6d4r\" (UniqueName: \"kubernetes.io/projected/8422281d-af45-4f17-8f15-ac3fd9da4bbc-kube-api-access-d6d4r\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611635 master-0 kubenswrapper[7784]: I0223 13:01:41.611558 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-systemd\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611635 master-0 kubenswrapper[7784]: I0223 13:01:41.611585 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-conf\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611695 master-0 kubenswrapper[7784]: I0223 13:01:41.611639 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysconfig\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611695 master-0 kubenswrapper[7784]: I0223 13:01:41.611661 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-host\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611752 master-0 kubenswrapper[7784]: I0223 13:01:41.611739 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.611788 master-0 kubenswrapper[7784]: I0223 13:01:41.611777 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-var-lib-kubelet\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.611818 master-0 kubenswrapper[7784]: I0223 13:01:41.611795 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.611894 master-0 kubenswrapper[7784]: I0223 13:01:41.611859 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-lib-modules\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612171 master-0 kubenswrapper[7784]: I0223 13:01:41.612142 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-lib-modules\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612238 master-0 kubenswrapper[7784]: I0223 13:01:41.612214 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-kubernetes\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612661 master-0 kubenswrapper[7784]: I0223 13:01:41.612629 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-host\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612661 master-0 kubenswrapper[7784]: I0223 13:01:41.612630 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-var-lib-kubelet\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612739 master-0 kubenswrapper[7784]: I0223 13:01:41.612666 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-sys\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612822 master-0 kubenswrapper[7784]: I0223 13:01:41.612769 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-systemd\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612856 master-0 kubenswrapper[7784]: I0223 13:01:41.612779 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysconfig\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612890 master-0 kubenswrapper[7784]: I0223 13:01:41.612868 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-run\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.612890 master-0 kubenswrapper[7784]: I0223 13:01:41.612882 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-conf\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.613050 master-0 kubenswrapper[7784]: I0223 13:01:41.613023 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.616588 master-0 kubenswrapper[7784]: I0223 13:01:41.616551 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-tuned\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.619432 master-0 kubenswrapper[7784]: I0223 13:01:41.619392 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-tmp\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.637437 master-0 kubenswrapper[7784]: I0223 13:01:41.637380 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6d4r\" (UniqueName: \"kubernetes.io/projected/8422281d-af45-4f17-8f15-ac3fd9da4bbc-kube-api-access-d6d4r\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.706231 master-0 kubenswrapper[7784]: I0223 13:01:41.706170 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:01:41.713490 master-0 kubenswrapper[7784]: I0223 13:01:41.713445 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.713567 master-0 kubenswrapper[7784]: I0223 13:01:41.713491 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.713567 master-0 kubenswrapper[7784]: I0223 13:01:41.713543 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6cl\" (UniqueName: \"kubernetes.io/projected/2acc6d35-5679-4fac-970f-3d2ff954cc33-kube-api-access-kc6cl\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.714538 master-0 kubenswrapper[7784]: I0223 13:01:41.714488 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.721076 master-0 kubenswrapper[7784]: I0223 13:01:41.721031 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.724117 master-0 kubenswrapper[7784]: W0223 13:01:41.724066 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8422281d_af45_4f17_8f15_ac3fd9da4bbc.slice/crio-09689180eaf00be7c7758f7cc69a97d3ee4b4d1917b19c8876214c6f8f0aae87 WatchSource:0}: Error finding container 09689180eaf00be7c7758f7cc69a97d3ee4b4d1917b19c8876214c6f8f0aae87: Status 404 returned error can't find the container with id 09689180eaf00be7c7758f7cc69a97d3ee4b4d1917b19c8876214c6f8f0aae87
Feb 23 13:01:41.748446 master-0 kubenswrapper[7784]: I0223 13:01:41.745550 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6cl\" (UniqueName: \"kubernetes.io/projected/2acc6d35-5679-4fac-970f-3d2ff954cc33-kube-api-access-kc6cl\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.807676 master-0 kubenswrapper[7784]: I0223 13:01:41.807610 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-rxc8b"]
Feb 23 13:01:41.808182 master-0 kubenswrapper[7784]: I0223 13:01:41.808147 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:01:41.854977 master-0 kubenswrapper[7784]: I0223 13:01:41.854489 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ljphn"
Feb 23 13:01:41.916419 master-0 kubenswrapper[7784]: I0223 13:01:41.916357 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64s6\" (UniqueName: \"kubernetes.io/projected/6dc83a57-34c5-4c64-97d3-b6191ba690eb-kube-api-access-b64s6\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:01:41.916419 master-0 kubenswrapper[7784]: I0223 13:01:41.916408 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6dc83a57-34c5-4c64-97d3-b6191ba690eb-hosts-file\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:01:41.988263 master-0 kubenswrapper[7784]: I0223 13:01:41.988188 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" event={"ID":"8422281d-af45-4f17-8f15-ac3fd9da4bbc","Type":"ContainerStarted","Data":"33fb8b16d11c35215b92bb63271e852f3a508013edaf9bb22b3718b6250d5b71"}
Feb 23 13:01:41.990383 master-0 kubenswrapper[7784]: I0223 13:01:41.988277 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" event={"ID":"8422281d-af45-4f17-8f15-ac3fd9da4bbc","Type":"ContainerStarted","Data":"09689180eaf00be7c7758f7cc69a97d3ee4b4d1917b19c8876214c6f8f0aae87"}
Feb 23 13:01:41.997882 master-0 kubenswrapper[7784]: I0223 13:01:41.997817 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" event={"ID":"f3fcd99e-272e-4877-a2ea-9492ad7f9689","Type":"ContainerStarted","Data":"566f21c578e2700740d08fec58d84fd003c6e58e1135212cf29c1610b5504891"}
Feb 23 13:01:42.020369 master-0 kubenswrapper[7784]: I0223 13:01:42.020079 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b64s6\" (UniqueName: \"kubernetes.io/projected/6dc83a57-34c5-4c64-97d3-b6191ba690eb-kube-api-access-b64s6\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:01:42.020369 master-0 kubenswrapper[7784]: I0223 13:01:42.020138 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6dc83a57-34c5-4c64-97d3-b6191ba690eb-hosts-file\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:01:42.020369 master-0 kubenswrapper[7784]: I0223 13:01:42.020230 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6dc83a57-34c5-4c64-97d3-b6191ba690eb-hosts-file\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:01:42.035536 master-0 kubenswrapper[7784]: I0223 13:01:42.031297 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" podStartSLOduration=1.031270286 podStartE2EDuration="1.031270286s" podCreationTimestamp="2026-02-23 13:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:42.021642254 +0000 UTC m=+44.756495907" watchObservedRunningTime="2026-02-23 13:01:42.031270286 +0000 UTC m=+44.766123929"
Feb 23 13:01:42.054918 master-0 kubenswrapper[7784]: I0223 13:01:42.054527 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b64s6\" (UniqueName: \"kubernetes.io/projected/6dc83a57-34c5-4c64-97d3-b6191ba690eb-kube-api-access-b64s6\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:01:42.057640 master-0 kubenswrapper[7784]: I0223 13:01:42.057550 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=7.057530742 podStartE2EDuration="7.057530742s" podCreationTimestamp="2026-02-23 13:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:42.055858081 +0000 UTC m=+44.790711724" watchObservedRunningTime="2026-02-23 13:01:42.057530742 +0000 UTC m=+44.792384385"
Feb 23 13:01:42.172692 master-0 kubenswrapper[7784]: I0223 13:01:42.172238 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:01:42.191457 master-0 kubenswrapper[7784]: I0223 13:01:42.191266 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ljphn"]
Feb 23 13:01:42.205072 master-0 kubenswrapper[7784]: W0223 13:01:42.205025 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dc83a57_34c5_4c64_97d3_b6191ba690eb.slice/crio-145c6dfd849efd46dc26276303a1f5b415b80906ee5317528490f8e2825ca752 WatchSource:0}: Error finding container 145c6dfd849efd46dc26276303a1f5b415b80906ee5317528490f8e2825ca752: Status 404 returned error can't find the container with id 145c6dfd849efd46dc26276303a1f5b415b80906ee5317528490f8e2825ca752
Feb 23 13:01:42.207916 master-0 kubenswrapper[7784]: W0223 13:01:42.207821 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2acc6d35_5679_4fac_970f_3d2ff954cc33.slice/crio-a9b39c2081f67776044f305ac72592df90a62e4fce161411cf7d76ff26a6efb9 WatchSource:0}: Error finding container a9b39c2081f67776044f305ac72592df90a62e4fce161411cf7d76ff26a6efb9: Status 404 returned error can't find the container with id a9b39c2081f67776044f305ac72592df90a62e4fce161411cf7d76ff26a6efb9
Feb 23 13:01:42.663746 master-0 kubenswrapper[7784]: I0223 13:01:42.663631 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5594d7855b-rgfnb"]
Feb 23 13:01:42.664673 master-0 kubenswrapper[7784]: I0223 13:01:42.664657 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.673040 master-0 kubenswrapper[7784]: I0223 13:01:42.672991 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:01:42.676213 master-0 kubenswrapper[7784]: I0223 13:01:42.676140 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:01:42.676482 master-0 kubenswrapper[7784]: I0223 13:01:42.676456 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:01:42.676638 master-0 kubenswrapper[7784]: I0223 13:01:42.676478 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:01:42.676708 master-0 kubenswrapper[7784]: I0223 13:01:42.676654 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:01:42.678122 master-0 kubenswrapper[7784]: I0223 13:01:42.677805 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:01:42.685754 master-0 kubenswrapper[7784]: I0223 13:01:42.685685 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5594d7855b-rgfnb"]
Feb 23 13:01:42.834953 master-0 kubenswrapper[7784]: I0223 13:01:42.834608 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-client-ca\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.834953 master-0 kubenswrapper[7784]: I0223 13:01:42.834661 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-proxy-ca-bundles\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.834953 master-0 kubenswrapper[7784]: I0223 13:01:42.834711 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b696cf8-3a0d-4b21-b620-308d0a11952a-serving-cert\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.834953 master-0 kubenswrapper[7784]: I0223 13:01:42.834728 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2l9t\" (UniqueName: \"kubernetes.io/projected/9b696cf8-3a0d-4b21-b620-308d0a11952a-kube-api-access-r2l9t\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.834953 master-0 kubenswrapper[7784]: I0223 13:01:42.834749 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-config\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.935718 master-0 kubenswrapper[7784]: I0223 13:01:42.935579 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b696cf8-3a0d-4b21-b620-308d0a11952a-serving-cert\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.935930 master-0 kubenswrapper[7784]: I0223 13:01:42.935880 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2l9t\" (UniqueName: \"kubernetes.io/projected/9b696cf8-3a0d-4b21-b620-308d0a11952a-kube-api-access-r2l9t\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.936005 master-0 kubenswrapper[7784]: I0223 13:01:42.935962 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-config\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.936234 master-0 kubenswrapper[7784]: I0223 13:01:42.936199 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-client-ca\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.936274 master-0 kubenswrapper[7784]: I0223 13:01:42.936243 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-proxy-ca-bundles\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.937741 master-0 kubenswrapper[7784]: I0223 13:01:42.937704 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-client-ca\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.938329 master-0 kubenswrapper[7784]: I0223 13:01:42.938281 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-config\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.938425 master-0 kubenswrapper[7784]: I0223 13:01:42.938380 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-proxy-ca-bundles\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.944447 master-0 kubenswrapper[7784]: I0223 13:01:42.944395 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b696cf8-3a0d-4b21-b620-308d0a11952a-serving-cert\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:42.953199 master-0 kubenswrapper[7784]: I0223 13:01:42.953160 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2l9t\" (UniqueName: \"kubernetes.io/projected/9b696cf8-3a0d-4b21-b620-308d0a11952a-kube-api-access-r2l9t\") pod \"controller-manager-5594d7855b-rgfnb\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:43.006714 master-0 kubenswrapper[7784]: I0223 13:01:43.006659 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb"
Feb 23 13:01:43.024313 master-0 kubenswrapper[7784]: I0223 13:01:43.024268 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rxc8b" event={"ID":"6dc83a57-34c5-4c64-97d3-b6191ba690eb","Type":"ContainerStarted","Data":"b9f7d190827a9f340c3d6b00d9d6be0f3faeb61b21ae8b1d46311e51a8b5475b"}
Feb 23 13:01:43.024407 master-0 kubenswrapper[7784]: I0223 13:01:43.024322 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rxc8b" event={"ID":"6dc83a57-34c5-4c64-97d3-b6191ba690eb","Type":"ContainerStarted","Data":"145c6dfd849efd46dc26276303a1f5b415b80906ee5317528490f8e2825ca752"}
Feb 23 13:01:43.026430 master-0 kubenswrapper[7784]: I0223 13:01:43.026399 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ljphn" event={"ID":"2acc6d35-5679-4fac-970f-3d2ff954cc33","Type":"ContainerStarted","Data":"a9b39c2081f67776044f305ac72592df90a62e4fce161411cf7d76ff26a6efb9"}
Feb 23 13:01:43.043572 master-0 kubenswrapper[7784]: I0223 13:01:43.043506 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rxc8b" podStartSLOduration=2.043484714 podStartE2EDuration="2.043484714s" podCreationTimestamp="2026-02-23 13:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:43.041845352 +0000 UTC m=+45.776699105" watchObservedRunningTime="2026-02-23 13:01:43.043484714 +0000 UTC m=+45.778338347"
Feb 23 13:01:43.092908 master-0 kubenswrapper[7784]: I0223 13:01:43.088839 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 23 13:01:43.092908 master-0 kubenswrapper[7784]: I0223 13:01:43.089988 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.093885 master-0 kubenswrapper[7784]: I0223 13:01:43.093019 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 23 13:01:43.104912 master-0 kubenswrapper[7784]: I0223 13:01:43.099614 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 23 13:01:43.204378 master-0 kubenswrapper[7784]: I0223 13:01:43.204258 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5594d7855b-rgfnb"]
Feb 23 13:01:43.241180 master-0 kubenswrapper[7784]: I0223 13:01:43.241143 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54b76471-bb9d-45a1-b3be-53e4f013e604-kube-api-access\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.241292 master-0 kubenswrapper[7784]: I0223 13:01:43.241196 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.241292 master-0 kubenswrapper[7784]: I0223 13:01:43.241226 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-var-lock\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.342808 master-0 kubenswrapper[7784]: I0223 13:01:43.342629 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.342808 master-0 kubenswrapper[7784]: I0223 13:01:43.342695 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-var-lock\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.342808 master-0 kubenswrapper[7784]: I0223 13:01:43.342744 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.343121 master-0 kubenswrapper[7784]: I0223 13:01:43.342818 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-var-lock\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.343121 master-0 kubenswrapper[7784]: I0223 13:01:43.342820 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54b76471-bb9d-45a1-b3be-53e4f013e604-kube-api-access\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.363007 master-0 kubenswrapper[7784]: I0223 13:01:43.362960 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54b76471-bb9d-45a1-b3be-53e4f013e604-kube-api-access\") pod \"installer-1-master-0\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.415161 master-0 kubenswrapper[7784]: I0223 13:01:43.415099 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:01:43.627042 master-0 kubenswrapper[7784]: I0223 13:01:43.623772 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"]
Feb 23 13:01:43.627042 master-0 kubenswrapper[7784]: I0223 13:01:43.624482 7784 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.628834 master-0 kubenswrapper[7784]: I0223 13:01:43.628778 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 13:01:43.629514 master-0 kubenswrapper[7784]: I0223 13:01:43.629487 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 13:01:43.629922 master-0 kubenswrapper[7784]: I0223 13:01:43.629905 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 13:01:43.630235 master-0 kubenswrapper[7784]: I0223 13:01:43.630219 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 13:01:43.630526 master-0 kubenswrapper[7784]: I0223 13:01:43.630481 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 13:01:43.631869 master-0 kubenswrapper[7784]: I0223 13:01:43.630992 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 13:01:43.632042 master-0 kubenswrapper[7784]: I0223 13:01:43.632025 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 13:01:43.638993 master-0 kubenswrapper[7784]: I0223 13:01:43.633634 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 13:01:43.638993 master-0 kubenswrapper[7784]: I0223 13:01:43.636050 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"] Feb 23 13:01:43.754128 master-0 kubenswrapper[7784]: I0223 13:01:43.754060 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cksnd\" 
(UniqueName: \"kubernetes.io/projected/77ea2b54-bcc2-4c4e-9415-03984721b5b1-kube-api-access-cksnd\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.754128 master-0 kubenswrapper[7784]: I0223 13:01:43.754108 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.754128 master-0 kubenswrapper[7784]: I0223 13:01:43.754132 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.754566 master-0 kubenswrapper[7784]: I0223 13:01:43.754158 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.754566 master-0 kubenswrapper[7784]: I0223 13:01:43.754453 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.754566 master-0 
kubenswrapper[7784]: I0223 13:01:43.754491 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.754708 master-0 kubenswrapper[7784]: I0223 13:01:43.754583 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.754708 master-0 kubenswrapper[7784]: I0223 13:01:43.754626 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-dir\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.855533 master-0 kubenswrapper[7784]: I0223 13:01:43.855484 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cksnd\" (UniqueName: \"kubernetes.io/projected/77ea2b54-bcc2-4c4e-9415-03984721b5b1-kube-api-access-cksnd\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.855533 master-0 kubenswrapper[7784]: I0223 13:01:43.855535 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: 
\"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.855797 master-0 kubenswrapper[7784]: I0223 13:01:43.855559 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.855797 master-0 kubenswrapper[7784]: I0223 13:01:43.855719 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.857528 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.857595 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.857668 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.857712 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-dir\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.858036 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-dir\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.858923 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.858927 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.861050 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.861393 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.862841 master-0 kubenswrapper[7784]: I0223 13:01:43.861618 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.869609 master-0 kubenswrapper[7784]: I0223 13:01:43.869570 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.881643 master-0 kubenswrapper[7784]: I0223 13:01:43.881517 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cksnd\" (UniqueName: \"kubernetes.io/projected/77ea2b54-bcc2-4c4e-9415-03984721b5b1-kube-api-access-cksnd\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:43.989880 master-0 kubenswrapper[7784]: I0223 13:01:43.989810 7784 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:44.740598 master-0 kubenswrapper[7784]: I0223 13:01:44.740454 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 23 13:01:44.741554 master-0 kubenswrapper[7784]: I0223 13:01:44.740725 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="797b4e06-e895-4ccc-a8f8-9de5d3a6663f" containerName="installer" containerID="cri-o://7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab" gracePeriod=30 Feb 23 13:01:45.037285 master-0 kubenswrapper[7784]: I0223 13:01:45.037178 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" event={"ID":"9b696cf8-3a0d-4b21-b620-308d0a11952a","Type":"ContainerStarted","Data":"5d6414c4ac09cc2d75e74f71aa67bd2beb985dfb94824f8a48bcacdac5efd244"} Feb 23 13:01:45.079130 master-0 kubenswrapper[7784]: I0223 13:01:45.079066 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 23 13:01:45.079980 master-0 kubenswrapper[7784]: I0223 13:01:45.079950 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.082970 master-0 kubenswrapper[7784]: I0223 13:01:45.082927 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 23 13:01:45.083512 master-0 kubenswrapper[7784]: I0223 13:01:45.083440 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 13:01:45.175381 master-0 kubenswrapper[7784]: I0223 13:01:45.175303 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-var-lock\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.175381 master-0 kubenswrapper[7784]: I0223 13:01:45.175393 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.175715 master-0 kubenswrapper[7784]: I0223 13:01:45.175615 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.277105 master-0 kubenswrapper[7784]: I0223 13:01:45.276980 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-var-lock\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.277105 master-0 kubenswrapper[7784]: I0223 13:01:45.277109 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.277507 master-0 kubenswrapper[7784]: I0223 13:01:45.277249 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-var-lock\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.277748 master-0 kubenswrapper[7784]: I0223 13:01:45.277625 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.277748 master-0 kubenswrapper[7784]: I0223 13:01:45.277701 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.300763 master-0 kubenswrapper[7784]: I0223 13:01:45.300634 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:45.405278 master-0 kubenswrapper[7784]: I0223 13:01:45.403922 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:01:46.322491 master-0 kubenswrapper[7784]: I0223 13:01:46.321696 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 23 13:01:46.335257 master-0 kubenswrapper[7784]: W0223 13:01:46.333883 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod54b76471_bb9d_45a1_b3be_53e4f013e604.slice/crio-2aa3a18ca035e1c8a4a8e4c55ea1292328496404fe053666f3bd40c3fd5062f7 WatchSource:0}: Error finding container 2aa3a18ca035e1c8a4a8e4c55ea1292328496404fe053666f3bd40c3fd5062f7: Status 404 returned error can't find the container with id 2aa3a18ca035e1c8a4a8e4c55ea1292328496404fe053666f3bd40c3fd5062f7 Feb 23 13:01:46.389169 master-0 kubenswrapper[7784]: I0223 13:01:46.389108 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 23 13:01:46.402366 master-0 kubenswrapper[7784]: I0223 13:01:46.401354 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"] Feb 23 13:01:47.054172 master-0 kubenswrapper[7784]: I0223 13:01:47.053945 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" event={"ID":"9b696cf8-3a0d-4b21-b620-308d0a11952a","Type":"ContainerStarted","Data":"a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833"} Feb 23 13:01:47.054409 master-0 kubenswrapper[7784]: I0223 13:01:47.054383 7784 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" Feb 23 13:01:47.057867 master-0 kubenswrapper[7784]: I0223 13:01:47.057823 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" event={"ID":"77ea2b54-bcc2-4c4e-9415-03984721b5b1","Type":"ContainerStarted","Data":"309d089bca6e7c97d1cbeac6a63a1ce937ecc0912c1d3b3166d1ba3db4f77535"} Feb 23 13:01:47.060921 master-0 kubenswrapper[7784]: I0223 13:01:47.060781 7784 generic.go:334] "Generic (PLEG): container finished" podID="922e0be5-23c2-481e-89be-e918dc4ce90c" containerID="d43691285e17b262ba50eeb68e1eefd1b056cc1972a6de9a447539cd5b864f7e" exitCode=0 Feb 23 13:01:47.061018 master-0 kubenswrapper[7784]: I0223 13:01:47.060929 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" event={"ID":"922e0be5-23c2-481e-89be-e918dc4ce90c","Type":"ContainerDied","Data":"d43691285e17b262ba50eeb68e1eefd1b056cc1972a6de9a447539cd5b864f7e"} Feb 23 13:01:47.066917 master-0 kubenswrapper[7784]: I0223 13:01:47.066863 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" Feb 23 13:01:47.072788 master-0 kubenswrapper[7784]: I0223 13:01:47.072656 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"54b76471-bb9d-45a1-b3be-53e4f013e604","Type":"ContainerStarted","Data":"886c9563273e1980f3ecc0464372fafeb8a67330b8225928122ba4af2f8bda52"} Feb 23 13:01:47.072788 master-0 kubenswrapper[7784]: I0223 13:01:47.072792 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"54b76471-bb9d-45a1-b3be-53e4f013e604","Type":"ContainerStarted","Data":"2aa3a18ca035e1c8a4a8e4c55ea1292328496404fe053666f3bd40c3fd5062f7"} Feb 23 13:01:47.073585 master-0 kubenswrapper[7784]: I0223 
13:01:47.073510 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" podStartSLOduration=9.073485185 podStartE2EDuration="9.073485185s" podCreationTimestamp="2026-02-23 13:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:47.0704982 +0000 UTC m=+49.805351863" watchObservedRunningTime="2026-02-23 13:01:47.073485185 +0000 UTC m=+49.808338828" Feb 23 13:01:47.075110 master-0 kubenswrapper[7784]: I0223 13:01:47.075065 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"c09724e9-277a-4fb0-a6c2-8f18ecefad60","Type":"ContainerStarted","Data":"0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc"} Feb 23 13:01:47.075194 master-0 kubenswrapper[7784]: I0223 13:01:47.075119 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"c09724e9-277a-4fb0-a6c2-8f18ecefad60","Type":"ContainerStarted","Data":"35a818ec3dd4aefac3281c1a7ddf64f96e3547a3060f7fe2a1805808d16db4f1"} Feb 23 13:01:47.082607 master-0 kubenswrapper[7784]: I0223 13:01:47.082544 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" event={"ID":"f3fcd99e-272e-4877-a2ea-9492ad7f9689","Type":"ContainerStarted","Data":"684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d"} Feb 23 13:01:47.082874 master-0 kubenswrapper[7784]: I0223 13:01:47.082828 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:47.092817 master-0 kubenswrapper[7784]: I0223 13:01:47.092756 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:47.093726 master-0 kubenswrapper[7784]: I0223 13:01:47.093682 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ljphn" event={"ID":"2acc6d35-5679-4fac-970f-3d2ff954cc33","Type":"ContainerStarted","Data":"1f0299f3d20da364f17556940668892e9fef192f83e7c53e4b3ed0563f463f2b"} Feb 23 13:01:47.093811 master-0 kubenswrapper[7784]: I0223 13:01:47.093743 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ljphn" event={"ID":"2acc6d35-5679-4fac-970f-3d2ff954cc33","Type":"ContainerStarted","Data":"92bec4ec423f6d9f83012faf46143f36b9d9a04a3c31b5f624f065333158af16"} Feb 23 13:01:47.093907 master-0 kubenswrapper[7784]: I0223 13:01:47.093876 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ljphn" Feb 23 13:01:47.131690 master-0 kubenswrapper[7784]: I0223 13:01:47.131252 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" podStartSLOduration=4.848318508 podStartE2EDuration="9.13122948s" podCreationTimestamp="2026-02-23 13:01:38 +0000 UTC" firstStartedPulling="2026-02-23 13:01:41.580760448 +0000 UTC m=+44.315614081" lastFinishedPulling="2026-02-23 13:01:45.86367141 +0000 UTC m=+48.598525053" observedRunningTime="2026-02-23 13:01:47.12964842 +0000 UTC m=+49.864502063" watchObservedRunningTime="2026-02-23 13:01:47.13122948 +0000 UTC m=+49.866083113" Feb 23 13:01:47.142273 master-0 kubenswrapper[7784]: I0223 13:01:47.142215 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 23 13:01:47.142957 master-0 kubenswrapper[7784]: I0223 13:01:47.142933 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.147486 master-0 kubenswrapper[7784]: I0223 13:01:47.147283 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=4.147265841 podStartE2EDuration="4.147265841s" podCreationTimestamp="2026-02-23 13:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:47.145954439 +0000 UTC m=+49.880808082" watchObservedRunningTime="2026-02-23 13:01:47.147265841 +0000 UTC m=+49.882119504" Feb 23 13:01:47.166454 master-0 kubenswrapper[7784]: I0223 13:01:47.164378 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 23 13:01:47.174463 master-0 kubenswrapper[7784]: I0223 13:01:47.173407 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ljphn" podStartSLOduration=2.491980611 podStartE2EDuration="6.173383465s" podCreationTimestamp="2026-02-23 13:01:41 +0000 UTC" firstStartedPulling="2026-02-23 13:01:42.211832956 +0000 UTC m=+44.946686599" lastFinishedPulling="2026-02-23 13:01:45.89323581 +0000 UTC m=+48.628089453" observedRunningTime="2026-02-23 13:01:47.172732959 +0000 UTC m=+49.907586602" watchObservedRunningTime="2026-02-23 13:01:47.173383465 +0000 UTC m=+49.908237108" Feb 23 13:01:47.193014 master-0 kubenswrapper[7784]: I0223 13:01:47.192917 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=2.192890173 podStartE2EDuration="2.192890173s" podCreationTimestamp="2026-02-23 13:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:47.192649687 +0000 UTC m=+49.927503320" 
watchObservedRunningTime="2026-02-23 13:01:47.192890173 +0000 UTC m=+49.927743816" Feb 23 13:01:47.309177 master-0 kubenswrapper[7784]: I0223 13:01:47.309114 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.309415 master-0 kubenswrapper[7784]: I0223 13:01:47.309223 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.309415 master-0 kubenswrapper[7784]: I0223 13:01:47.309303 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-var-lock\") pod \"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.410702 master-0 kubenswrapper[7784]: I0223 13:01:47.410594 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-var-lock\") pod \"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.410702 master-0 kubenswrapper[7784]: I0223 13:01:47.410651 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kubelet-dir\") pod 
\"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.410702 master-0 kubenswrapper[7784]: I0223 13:01:47.410700 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.411394 master-0 kubenswrapper[7784]: I0223 13:01:47.411137 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.411394 master-0 kubenswrapper[7784]: I0223 13:01:47.411226 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-var-lock\") pod \"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.429264 master-0 kubenswrapper[7784]: I0223 13:01:47.429225 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.490239 master-0 kubenswrapper[7784]: I0223 13:01:47.490171 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:47.951266 master-0 kubenswrapper[7784]: I0223 13:01:47.951117 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 23 13:01:47.961879 master-0 kubenswrapper[7784]: W0223 13:01:47.961806 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod92ce0964_5b4d_4df4_88e5_74b5d00b706c.slice/crio-5053e6195512edf0ff85f7cc94b4039190f0c4f7ef5b08a5379248ea121168c5 WatchSource:0}: Error finding container 5053e6195512edf0ff85f7cc94b4039190f0c4f7ef5b08a5379248ea121168c5: Status 404 returned error can't find the container with id 5053e6195512edf0ff85f7cc94b4039190f0c4f7ef5b08a5379248ea121168c5 Feb 23 13:01:48.116216 master-0 kubenswrapper[7784]: I0223 13:01:48.115092 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" event={"ID":"922e0be5-23c2-481e-89be-e918dc4ce90c","Type":"ContainerStarted","Data":"d6497b8dfe1544132502ff6eb94d0121de97eda89f88a03d0bf5095d0744715c"} Feb 23 13:01:48.116216 master-0 kubenswrapper[7784]: I0223 13:01:48.115794 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" event={"ID":"922e0be5-23c2-481e-89be-e918dc4ce90c","Type":"ContainerStarted","Data":"4fbb94e368f85e0c7b5bd812d9885775c22fcdf2b41f1bb95fe8a41ebedc3735"} Feb 23 13:01:48.122843 master-0 kubenswrapper[7784]: I0223 13:01:48.122783 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"92ce0964-5b4d-4df4-88e5-74b5d00b706c","Type":"ContainerStarted","Data":"5053e6195512edf0ff85f7cc94b4039190f0c4f7ef5b08a5379248ea121168c5"} Feb 23 13:01:49.131905 master-0 kubenswrapper[7784]: I0223 13:01:49.131830 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" 
event={"ID":"92ce0964-5b4d-4df4-88e5-74b5d00b706c","Type":"ContainerStarted","Data":"56d5310559a88ed0b40d62cd527190439470a0e322c8924670c0229575bd33cb"} Feb 23 13:01:49.151735 master-0 kubenswrapper[7784]: I0223 13:01:49.151585 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.151552494 podStartE2EDuration="2.151552494s" podCreationTimestamp="2026-02-23 13:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:49.151245666 +0000 UTC m=+51.886099349" watchObservedRunningTime="2026-02-23 13:01:49.151552494 +0000 UTC m=+51.886406177" Feb 23 13:01:49.155079 master-0 kubenswrapper[7784]: I0223 13:01:49.154982 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" podStartSLOduration=8.391168048 podStartE2EDuration="14.154965769s" podCreationTimestamp="2026-02-23 13:01:35 +0000 UTC" firstStartedPulling="2026-02-23 13:01:40.134173477 +0000 UTC m=+42.869027120" lastFinishedPulling="2026-02-23 13:01:45.897971198 +0000 UTC m=+48.632824841" observedRunningTime="2026-02-23 13:01:48.143871969 +0000 UTC m=+50.878725642" watchObservedRunningTime="2026-02-23 13:01:49.154965769 +0000 UTC m=+51.889819452" Feb 23 13:01:50.140713 master-0 kubenswrapper[7784]: I0223 13:01:50.140516 7784 generic.go:334] "Generic (PLEG): container finished" podID="77ea2b54-bcc2-4c4e-9415-03984721b5b1" containerID="40937a497e7a0d08e36a8702283e0f7ae419987db4a94fd11a6d5428287854b0" exitCode=0 Feb 23 13:01:50.141871 master-0 kubenswrapper[7784]: I0223 13:01:50.141823 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" event={"ID":"77ea2b54-bcc2-4c4e-9415-03984721b5b1","Type":"ContainerDied","Data":"40937a497e7a0d08e36a8702283e0f7ae419987db4a94fd11a6d5428287854b0"} Feb 23 
13:01:51.147258 master-0 kubenswrapper[7784]: I0223 13:01:51.147183 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" event={"ID":"77ea2b54-bcc2-4c4e-9415-03984721b5b1","Type":"ContainerStarted","Data":"475d8750d93ea151a7fbf9832bf3df414c0dea69d78733d6dafa773d11ec4b49"} Feb 23 13:01:51.181588 master-0 kubenswrapper[7784]: I0223 13:01:51.180460 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" podStartSLOduration=5.057082836 podStartE2EDuration="8.180432051s" podCreationTimestamp="2026-02-23 13:01:43 +0000 UTC" firstStartedPulling="2026-02-23 13:01:46.427919154 +0000 UTC m=+49.162772797" lastFinishedPulling="2026-02-23 13:01:49.551268369 +0000 UTC m=+52.286122012" observedRunningTime="2026-02-23 13:01:51.177623261 +0000 UTC m=+53.912476904" watchObservedRunningTime="2026-02-23 13:01:51.180432051 +0000 UTC m=+53.915285694" Feb 23 13:01:52.318144 master-0 kubenswrapper[7784]: I0223 13:01:52.318055 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:01:52.318801 master-0 kubenswrapper[7784]: I0223 13:01:52.318188 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:01:52.325258 master-0 kubenswrapper[7784]: I0223 13:01:52.325212 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:01:53.162438 master-0 kubenswrapper[7784]: I0223 13:01:53.161428 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:01:53.990735 master-0 kubenswrapper[7784]: I0223 13:01:53.990670 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 
13:01:53.991359 master-0 kubenswrapper[7784]: I0223 13:01:53.990754 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:53.999213 master-0 kubenswrapper[7784]: I0223 13:01:53.999171 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:54.097819 master-0 kubenswrapper[7784]: I0223 13:01:54.097748 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"] Feb 23 13:01:54.098131 master-0 kubenswrapper[7784]: I0223 13:01:54.098038 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" podUID="a04058be-6928-48c4-a71e-bd9e6427c097" containerName="cluster-version-operator" containerID="cri-o://7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce" gracePeriod=130 Feb 23 13:01:54.168794 master-0 kubenswrapper[7784]: I0223 13:01:54.168699 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:01:54.263035 master-0 kubenswrapper[7784]: I0223 13:01:54.262984 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:01:54.335564 master-0 kubenswrapper[7784]: I0223 13:01:54.333410 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads\") pod \"a04058be-6928-48c4-a71e-bd9e6427c097\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " Feb 23 13:01:54.335564 master-0 kubenswrapper[7784]: I0223 13:01:54.333522 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") pod \"a04058be-6928-48c4-a71e-bd9e6427c097\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " Feb 23 13:01:54.335564 master-0 kubenswrapper[7784]: I0223 13:01:54.333565 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca\") pod \"a04058be-6928-48c4-a71e-bd9e6427c097\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " Feb 23 13:01:54.335564 master-0 kubenswrapper[7784]: I0223 13:01:54.333660 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access\") pod \"a04058be-6928-48c4-a71e-bd9e6427c097\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " Feb 23 13:01:54.335564 master-0 kubenswrapper[7784]: I0223 13:01:54.333684 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs\") pod \"a04058be-6928-48c4-a71e-bd9e6427c097\" (UID: \"a04058be-6928-48c4-a71e-bd9e6427c097\") " Feb 23 13:01:54.335564 master-0 kubenswrapper[7784]: I0223 
13:01:54.334007 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "a04058be-6928-48c4-a71e-bd9e6427c097" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:01:54.335564 master-0 kubenswrapper[7784]: I0223 13:01:54.334081 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "a04058be-6928-48c4-a71e-bd9e6427c097" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:01:54.337413 master-0 kubenswrapper[7784]: I0223 13:01:54.337273 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca" (OuterVolumeSpecName: "service-ca") pod "a04058be-6928-48c4-a71e-bd9e6427c097" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:54.341150 master-0 kubenswrapper[7784]: I0223 13:01:54.341098 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a04058be-6928-48c4-a71e-bd9e6427c097" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:01:54.342669 master-0 kubenswrapper[7784]: I0223 13:01:54.342619 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a04058be-6928-48c4-a71e-bd9e6427c097" (UID: "a04058be-6928-48c4-a71e-bd9e6427c097"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:01:54.435434 master-0 kubenswrapper[7784]: I0223 13:01:54.435358 7784 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:54.435434 master-0 kubenswrapper[7784]: I0223 13:01:54.435408 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a04058be-6928-48c4-a71e-bd9e6427c097-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:54.435434 master-0 kubenswrapper[7784]: I0223 13:01:54.435424 7784 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a04058be-6928-48c4-a71e-bd9e6427c097-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:54.435434 master-0 kubenswrapper[7784]: I0223 13:01:54.435440 7784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a04058be-6928-48c4-a71e-bd9e6427c097-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:54.435434 master-0 kubenswrapper[7784]: I0223 13:01:54.435452 7784 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a04058be-6928-48c4-a71e-bd9e6427c097-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:54.942600 master-0 kubenswrapper[7784]: I0223 
13:01:54.942527 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 23 13:01:54.942929 master-0 kubenswrapper[7784]: I0223 13:01:54.942856 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="92ce0964-5b4d-4df4-88e5-74b5d00b706c" containerName="installer" containerID="cri-o://56d5310559a88ed0b40d62cd527190439470a0e322c8924670c0229575bd33cb" gracePeriod=30 Feb 23 13:01:55.176407 master-0 kubenswrapper[7784]: I0223 13:01:55.175824 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_92ce0964-5b4d-4df4-88e5-74b5d00b706c/installer/0.log" Feb 23 13:01:55.176407 master-0 kubenswrapper[7784]: I0223 13:01:55.175905 7784 generic.go:334] "Generic (PLEG): container finished" podID="92ce0964-5b4d-4df4-88e5-74b5d00b706c" containerID="56d5310559a88ed0b40d62cd527190439470a0e322c8924670c0229575bd33cb" exitCode=1 Feb 23 13:01:55.176407 master-0 kubenswrapper[7784]: I0223 13:01:55.176057 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"92ce0964-5b4d-4df4-88e5-74b5d00b706c","Type":"ContainerDied","Data":"56d5310559a88ed0b40d62cd527190439470a0e322c8924670c0229575bd33cb"} Feb 23 13:01:55.179576 master-0 kubenswrapper[7784]: I0223 13:01:55.179499 7784 generic.go:334] "Generic (PLEG): container finished" podID="a04058be-6928-48c4-a71e-bd9e6427c097" containerID="7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce" exitCode=0 Feb 23 13:01:55.179669 master-0 kubenswrapper[7784]: I0223 13:01:55.179567 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" event={"ID":"a04058be-6928-48c4-a71e-bd9e6427c097","Type":"ContainerDied","Data":"7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce"} Feb 23 13:01:55.179669 master-0 
kubenswrapper[7784]: I0223 13:01:55.179611 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" Feb 23 13:01:55.179669 master-0 kubenswrapper[7784]: I0223 13:01:55.179623 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz" event={"ID":"a04058be-6928-48c4-a71e-bd9e6427c097","Type":"ContainerDied","Data":"caf590b2b92c1730b6806b3d56a7d6034f1571ee7f59de88a7b217b327e76afe"} Feb 23 13:01:55.179669 master-0 kubenswrapper[7784]: I0223 13:01:55.179668 7784 scope.go:117] "RemoveContainer" containerID="7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce" Feb 23 13:01:55.216117 master-0 kubenswrapper[7784]: I0223 13:01:55.211937 7784 scope.go:117] "RemoveContainer" containerID="7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce" Feb 23 13:01:55.216117 master-0 kubenswrapper[7784]: E0223 13:01:55.213690 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce\": container with ID starting with 7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce not found: ID does not exist" containerID="7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce" Feb 23 13:01:55.216117 master-0 kubenswrapper[7784]: I0223 13:01:55.215649 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce"} err="failed to get container status \"7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce\": rpc error: code = NotFound desc = could not find container \"7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce\": container with ID starting with 7e404aa1689cfb7b18eaadbe33a571c2d720a36d654342851fcb76550d56f8ce not found: ID 
does not exist" Feb 23 13:01:55.240492 master-0 kubenswrapper[7784]: I0223 13:01:55.240404 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"] Feb 23 13:01:55.242226 master-0 kubenswrapper[7784]: I0223 13:01:55.242165 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-lphxz"] Feb 23 13:01:55.282620 master-0 kubenswrapper[7784]: I0223 13:01:55.282283 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-57476485-8jbxf"] Feb 23 13:01:55.282620 master-0 kubenswrapper[7784]: E0223 13:01:55.282570 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04058be-6928-48c4-a71e-bd9e6427c097" containerName="cluster-version-operator" Feb 23 13:01:55.282620 master-0 kubenswrapper[7784]: I0223 13:01:55.282591 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04058be-6928-48c4-a71e-bd9e6427c097" containerName="cluster-version-operator" Feb 23 13:01:55.282999 master-0 kubenswrapper[7784]: I0223 13:01:55.282726 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04058be-6928-48c4-a71e-bd9e6427c097" containerName="cluster-version-operator" Feb 23 13:01:55.283309 master-0 kubenswrapper[7784]: I0223 13:01:55.283272 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.287470 master-0 kubenswrapper[7784]: I0223 13:01:55.287128 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 13:01:55.287470 master-0 kubenswrapper[7784]: I0223 13:01:55.287364 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:01:55.288683 master-0 kubenswrapper[7784]: I0223 13:01:55.288657 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:01:55.341440 master-0 kubenswrapper[7784]: I0223 13:01:55.341401 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_92ce0964-5b4d-4df4-88e5-74b5d00b706c/installer/0.log" Feb 23 13:01:55.341776 master-0 kubenswrapper[7784]: I0223 13:01:55.341480 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:55.348708 master-0 kubenswrapper[7784]: I0223 13:01:55.348647 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-ssl-certs\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.348771 master-0 kubenswrapper[7784]: I0223 13:01:55.348713 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24d878bd-05cd-414e-94c1-a3e9ce637331-kube-api-access\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.348771 master-0 kubenswrapper[7784]: I0223 13:01:55.348735 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.348771 master-0 kubenswrapper[7784]: I0223 13:01:55.348757 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.348895 master-0 kubenswrapper[7784]: I0223 13:01:55.348792 7784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.450160 master-0 kubenswrapper[7784]: I0223 13:01:55.449977 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kube-api-access\") pod \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " Feb 23 13:01:55.450420 master-0 kubenswrapper[7784]: I0223 13:01:55.450165 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-var-lock\") pod \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " Feb 23 13:01:55.450420 master-0 kubenswrapper[7784]: I0223 13:01:55.450183 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kubelet-dir\") pod \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\" (UID: \"92ce0964-5b4d-4df4-88e5-74b5d00b706c\") " Feb 23 13:01:55.450420 master-0 kubenswrapper[7784]: I0223 13:01:55.450246 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-var-lock" (OuterVolumeSpecName: "var-lock") pod "92ce0964-5b4d-4df4-88e5-74b5d00b706c" (UID: "92ce0964-5b4d-4df4-88e5-74b5d00b706c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:01:55.450420 master-0 kubenswrapper[7784]: I0223 13:01:55.450324 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "92ce0964-5b4d-4df4-88e5-74b5d00b706c" (UID: "92ce0964-5b4d-4df4-88e5-74b5d00b706c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:01:55.450420 master-0 kubenswrapper[7784]: I0223 13:01:55.450395 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.451226 master-0 kubenswrapper[7784]: I0223 13:01:55.450752 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-ssl-certs\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.451226 master-0 kubenswrapper[7784]: I0223 13:01:55.450851 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.451226 master-0 kubenswrapper[7784]: I0223 13:01:55.450993 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/24d878bd-05cd-414e-94c1-a3e9ce637331-kube-api-access\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.451226 master-0 kubenswrapper[7784]: I0223 13:01:55.451036 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.451226 master-0 kubenswrapper[7784]: I0223 13:01:55.451097 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:55.451226 master-0 kubenswrapper[7784]: I0223 13:01:55.451110 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:55.451226 master-0 kubenswrapper[7784]: I0223 13:01:55.451145 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.451487 master-0 kubenswrapper[7784]: I0223 13:01:55.451397 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-ssl-certs\") pod 
\"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.451625 master-0 kubenswrapper[7784]: I0223 13:01:55.451590 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.454774 master-0 kubenswrapper[7784]: I0223 13:01:55.453936 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.456692 master-0 kubenswrapper[7784]: I0223 13:01:55.456264 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "92ce0964-5b4d-4df4-88e5-74b5d00b706c" (UID: "92ce0964-5b4d-4df4-88e5-74b5d00b706c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:01:55.469331 master-0 kubenswrapper[7784]: I0223 13:01:55.469273 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24d878bd-05cd-414e-94c1-a3e9ce637331-kube-api-access\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.525188 master-0 kubenswrapper[7784]: I0223 13:01:55.525097 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04058be-6928-48c4-a71e-bd9e6427c097" path="/var/lib/kubelet/pods/a04058be-6928-48c4-a71e-bd9e6427c097/volumes" Feb 23 13:01:55.552297 master-0 kubenswrapper[7784]: I0223 13:01:55.552207 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ce0964-5b4d-4df4-88e5-74b5d00b706c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:55.633463 master-0 kubenswrapper[7784]: I0223 13:01:55.633379 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:01:55.651504 master-0 kubenswrapper[7784]: W0223 13:01:55.650447 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d878bd_05cd_414e_94c1_a3e9ce637331.slice/crio-384035756c76cf55d496b6a1d9d1c2ae74da05b1b5fe6f287cc5eebff9461073 WatchSource:0}: Error finding container 384035756c76cf55d496b6a1d9d1c2ae74da05b1b5fe6f287cc5eebff9461073: Status 404 returned error can't find the container with id 384035756c76cf55d496b6a1d9d1c2ae74da05b1b5fe6f287cc5eebff9461073 Feb 23 13:01:56.191971 master-0 kubenswrapper[7784]: I0223 13:01:56.191896 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_92ce0964-5b4d-4df4-88e5-74b5d00b706c/installer/0.log" Feb 23 13:01:56.192648 master-0 kubenswrapper[7784]: I0223 13:01:56.192012 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"92ce0964-5b4d-4df4-88e5-74b5d00b706c","Type":"ContainerDied","Data":"5053e6195512edf0ff85f7cc94b4039190f0c4f7ef5b08a5379248ea121168c5"} Feb 23 13:01:56.192648 master-0 kubenswrapper[7784]: I0223 13:01:56.192070 7784 scope.go:117] "RemoveContainer" containerID="56d5310559a88ed0b40d62cd527190439470a0e322c8924670c0229575bd33cb" Feb 23 13:01:56.192648 master-0 kubenswrapper[7784]: I0223 13:01:56.192180 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 23 13:01:56.197275 master-0 kubenswrapper[7784]: I0223 13:01:56.197215 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" event={"ID":"24d878bd-05cd-414e-94c1-a3e9ce637331","Type":"ContainerStarted","Data":"31acf0de4b73cbfff55422610e960c624d806171dcec6aaeddd658a636224147"} Feb 23 13:01:56.197275 master-0 kubenswrapper[7784]: I0223 13:01:56.197274 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" event={"ID":"24d878bd-05cd-414e-94c1-a3e9ce637331","Type":"ContainerStarted","Data":"384035756c76cf55d496b6a1d9d1c2ae74da05b1b5fe6f287cc5eebff9461073"} Feb 23 13:01:56.231007 master-0 kubenswrapper[7784]: I0223 13:01:56.230942 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 23 13:01:56.233732 master-0 kubenswrapper[7784]: I0223 13:01:56.233667 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 23 13:01:56.254577 master-0 kubenswrapper[7784]: I0223 13:01:56.254228 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" podStartSLOduration=1.254202561 podStartE2EDuration="1.254202561s" podCreationTimestamp="2026-02-23 13:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:56.250209931 +0000 UTC m=+58.985063574" watchObservedRunningTime="2026-02-23 13:01:56.254202561 +0000 UTC m=+58.989056194" Feb 23 13:01:56.943421 master-0 kubenswrapper[7784]: I0223 13:01:56.943358 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ljphn" Feb 23 13:01:57.528073 master-0 kubenswrapper[7784]: 
I0223 13:01:57.527965 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ce0964-5b4d-4df4-88e5-74b5d00b706c" path="/var/lib/kubelet/pods/92ce0964-5b4d-4df4-88e5-74b5d00b706c/volumes" Feb 23 13:01:57.544994 master-0 kubenswrapper[7784]: I0223 13:01:57.544919 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 23 13:01:57.545191 master-0 kubenswrapper[7784]: E0223 13:01:57.545152 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ce0964-5b4d-4df4-88e5-74b5d00b706c" containerName="installer" Feb 23 13:01:57.545191 master-0 kubenswrapper[7784]: I0223 13:01:57.545174 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ce0964-5b4d-4df4-88e5-74b5d00b706c" containerName="installer" Feb 23 13:01:57.545314 master-0 kubenswrapper[7784]: I0223 13:01:57.545294 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ce0964-5b4d-4df4-88e5-74b5d00b706c" containerName="installer" Feb 23 13:01:57.545725 master-0 kubenswrapper[7784]: I0223 13:01:57.545691 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.598011 master-0 kubenswrapper[7784]: I0223 13:01:57.597938 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 23 13:01:57.649545 master-0 kubenswrapper[7784]: I0223 13:01:57.649161 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-var-lock\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.649545 master-0 kubenswrapper[7784]: I0223 13:01:57.649278 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f31b1ec1-c140-46ff-8021-2a6f09b71647-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.649545 master-0 kubenswrapper[7784]: I0223 13:01:57.649330 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.753089 master-0 kubenswrapper[7784]: I0223 13:01:57.752947 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-var-lock\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.753089 master-0 kubenswrapper[7784]: I0223 13:01:57.753021 7784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f31b1ec1-c140-46ff-8021-2a6f09b71647-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.753089 master-0 kubenswrapper[7784]: I0223 13:01:57.753059 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.753411 master-0 kubenswrapper[7784]: I0223 13:01:57.753162 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.753411 master-0 kubenswrapper[7784]: I0223 13:01:57.753217 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-var-lock\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.776493 master-0 kubenswrapper[7784]: I0223 13:01:57.774823 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f31b1ec1-c140-46ff-8021-2a6f09b71647-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:57.885839 master-0 kubenswrapper[7784]: I0223 13:01:57.885743 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:01:58.375696 master-0 kubenswrapper[7784]: I0223 13:01:58.375519 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 23 13:01:59.191800 master-0 kubenswrapper[7784]: I0223 13:01:59.191649 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5594d7855b-rgfnb"] Feb 23 13:01:59.192381 master-0 kubenswrapper[7784]: I0223 13:01:59.191966 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" podUID="9b696cf8-3a0d-4b21-b620-308d0a11952a" containerName="controller-manager" containerID="cri-o://a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833" gracePeriod=30 Feb 23 13:01:59.216867 master-0 kubenswrapper[7784]: I0223 13:01:59.216790 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"f31b1ec1-c140-46ff-8021-2a6f09b71647","Type":"ContainerStarted","Data":"ac3ded0a44d7bae55f6257d5fe584453361f41510d53c1bfa09558969fdc0c87"} Feb 23 13:01:59.216867 master-0 kubenswrapper[7784]: I0223 13:01:59.216857 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"f31b1ec1-c140-46ff-8021-2a6f09b71647","Type":"ContainerStarted","Data":"f433a657bcc792ba408750676406b8e436df293362e80d87b9d5f70d165a6cbd"} Feb 23 13:01:59.238811 master-0 kubenswrapper[7784]: I0223 13:01:59.238734 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57"] Feb 23 13:01:59.239139 master-0 kubenswrapper[7784]: I0223 13:01:59.239065 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" 
podUID="f3fcd99e-272e-4877-a2ea-9492ad7f9689" containerName="route-controller-manager" containerID="cri-o://684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d" gracePeriod=30 Feb 23 13:01:59.243972 master-0 kubenswrapper[7784]: I0223 13:01:59.243880 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.243855459 podStartE2EDuration="2.243855459s" podCreationTimestamp="2026-02-23 13:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:59.243196663 +0000 UTC m=+61.978050346" watchObservedRunningTime="2026-02-23 13:01:59.243855459 +0000 UTC m=+61.978709102" Feb 23 13:01:59.770814 master-0 kubenswrapper[7784]: I0223 13:01:59.770761 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:01:59.775115 master-0 kubenswrapper[7784]: I0223 13:01:59.775069 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" Feb 23 13:01:59.783416 master-0 kubenswrapper[7784]: I0223 13:01:59.783377 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpb8j\" (UniqueName: \"kubernetes.io/projected/f3fcd99e-272e-4877-a2ea-9492ad7f9689-kube-api-access-xpb8j\") pod \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " Feb 23 13:01:59.783486 master-0 kubenswrapper[7784]: I0223 13:01:59.783430 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fcd99e-272e-4877-a2ea-9492ad7f9689-serving-cert\") pod \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " Feb 23 13:01:59.783525 master-0 kubenswrapper[7784]: I0223 13:01:59.783489 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-config\") pod \"9b696cf8-3a0d-4b21-b620-308d0a11952a\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " Feb 23 13:01:59.783525 master-0 kubenswrapper[7784]: I0223 13:01:59.783516 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b696cf8-3a0d-4b21-b620-308d0a11952a-serving-cert\") pod \"9b696cf8-3a0d-4b21-b620-308d0a11952a\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " Feb 23 13:01:59.783611 master-0 kubenswrapper[7784]: I0223 13:01:59.783593 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-config\") pod \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " Feb 23 13:01:59.783646 master-0 kubenswrapper[7784]: I0223 13:01:59.783627 7784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-proxy-ca-bundles\") pod \"9b696cf8-3a0d-4b21-b620-308d0a11952a\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " Feb 23 13:01:59.783826 master-0 kubenswrapper[7784]: I0223 13:01:59.783773 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-client-ca\") pod \"9b696cf8-3a0d-4b21-b620-308d0a11952a\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " Feb 23 13:01:59.783923 master-0 kubenswrapper[7784]: I0223 13:01:59.783890 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2l9t\" (UniqueName: \"kubernetes.io/projected/9b696cf8-3a0d-4b21-b620-308d0a11952a-kube-api-access-r2l9t\") pod \"9b696cf8-3a0d-4b21-b620-308d0a11952a\" (UID: \"9b696cf8-3a0d-4b21-b620-308d0a11952a\") " Feb 23 13:01:59.783960 master-0 kubenswrapper[7784]: I0223 13:01:59.783940 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-client-ca\") pod \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\" (UID: \"f3fcd99e-272e-4877-a2ea-9492ad7f9689\") " Feb 23 13:01:59.784282 master-0 kubenswrapper[7784]: I0223 13:01:59.784227 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9b696cf8-3a0d-4b21-b620-308d0a11952a" (UID: "9b696cf8-3a0d-4b21-b620-308d0a11952a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:59.784282 master-0 kubenswrapper[7784]: I0223 13:01:59.784266 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-config" (OuterVolumeSpecName: "config") pod "f3fcd99e-272e-4877-a2ea-9492ad7f9689" (UID: "f3fcd99e-272e-4877-a2ea-9492ad7f9689"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:59.784495 master-0 kubenswrapper[7784]: I0223 13:01:59.784468 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:59.784548 master-0 kubenswrapper[7784]: I0223 13:01:59.784492 7784 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:59.784548 master-0 kubenswrapper[7784]: I0223 13:01:59.784502 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-client-ca" (OuterVolumeSpecName: "client-ca") pod "9b696cf8-3a0d-4b21-b620-308d0a11952a" (UID: "9b696cf8-3a0d-4b21-b620-308d0a11952a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:59.784633 master-0 kubenswrapper[7784]: I0223 13:01:59.784582 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-config" (OuterVolumeSpecName: "config") pod "9b696cf8-3a0d-4b21-b620-308d0a11952a" (UID: "9b696cf8-3a0d-4b21-b620-308d0a11952a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:59.784671 master-0 kubenswrapper[7784]: I0223 13:01:59.784633 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-client-ca" (OuterVolumeSpecName: "client-ca") pod "f3fcd99e-272e-4877-a2ea-9492ad7f9689" (UID: "f3fcd99e-272e-4877-a2ea-9492ad7f9689"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:59.787135 master-0 kubenswrapper[7784]: I0223 13:01:59.787069 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fcd99e-272e-4877-a2ea-9492ad7f9689-kube-api-access-xpb8j" (OuterVolumeSpecName: "kube-api-access-xpb8j") pod "f3fcd99e-272e-4877-a2ea-9492ad7f9689" (UID: "f3fcd99e-272e-4877-a2ea-9492ad7f9689"). InnerVolumeSpecName "kube-api-access-xpb8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:01:59.788017 master-0 kubenswrapper[7784]: I0223 13:01:59.787978 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b696cf8-3a0d-4b21-b620-308d0a11952a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9b696cf8-3a0d-4b21-b620-308d0a11952a" (UID: "9b696cf8-3a0d-4b21-b620-308d0a11952a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:01:59.788286 master-0 kubenswrapper[7784]: I0223 13:01:59.788244 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b696cf8-3a0d-4b21-b620-308d0a11952a-kube-api-access-r2l9t" (OuterVolumeSpecName: "kube-api-access-r2l9t") pod "9b696cf8-3a0d-4b21-b620-308d0a11952a" (UID: "9b696cf8-3a0d-4b21-b620-308d0a11952a"). InnerVolumeSpecName "kube-api-access-r2l9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:01:59.788398 master-0 kubenswrapper[7784]: I0223 13:01:59.788356 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fcd99e-272e-4877-a2ea-9492ad7f9689-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f3fcd99e-272e-4877-a2ea-9492ad7f9689" (UID: "f3fcd99e-272e-4877-a2ea-9492ad7f9689"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:01:59.887384 master-0 kubenswrapper[7784]: I0223 13:01:59.885496 7784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fcd99e-272e-4877-a2ea-9492ad7f9689-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:59.887384 master-0 kubenswrapper[7784]: I0223 13:01:59.885543 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:59.887384 master-0 kubenswrapper[7784]: I0223 13:01:59.885556 7784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b696cf8-3a0d-4b21-b620-308d0a11952a-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:59.887384 master-0 kubenswrapper[7784]: I0223 13:01:59.885565 7784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b696cf8-3a0d-4b21-b620-308d0a11952a-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:59.887384 master-0 kubenswrapper[7784]: I0223 13:01:59.885576 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2l9t\" (UniqueName: \"kubernetes.io/projected/9b696cf8-3a0d-4b21-b620-308d0a11952a-kube-api-access-r2l9t\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:59.887384 master-0 kubenswrapper[7784]: I0223 13:01:59.885590 7784 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3fcd99e-272e-4877-a2ea-9492ad7f9689-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:59.887384 master-0 kubenswrapper[7784]: I0223 13:01:59.885603 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpb8j\" (UniqueName: \"kubernetes.io/projected/f3fcd99e-272e-4877-a2ea-9492ad7f9689-kube-api-access-xpb8j\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:00.224635 master-0 kubenswrapper[7784]: I0223 13:02:00.224568 7784 generic.go:334] "Generic (PLEG): container finished" podID="f3fcd99e-272e-4877-a2ea-9492ad7f9689" containerID="684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d" exitCode=0 Feb 23 13:02:00.225252 master-0 kubenswrapper[7784]: I0223 13:02:00.224631 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" event={"ID":"f3fcd99e-272e-4877-a2ea-9492ad7f9689","Type":"ContainerDied","Data":"684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d"} Feb 23 13:02:00.225252 master-0 kubenswrapper[7784]: I0223 13:02:00.224676 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" Feb 23 13:02:00.225252 master-0 kubenswrapper[7784]: I0223 13:02:00.224713 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57" event={"ID":"f3fcd99e-272e-4877-a2ea-9492ad7f9689","Type":"ContainerDied","Data":"566f21c578e2700740d08fec58d84fd003c6e58e1135212cf29c1610b5504891"} Feb 23 13:02:00.225252 master-0 kubenswrapper[7784]: I0223 13:02:00.224744 7784 scope.go:117] "RemoveContainer" containerID="684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d" Feb 23 13:02:00.227281 master-0 kubenswrapper[7784]: I0223 13:02:00.227234 7784 generic.go:334] "Generic (PLEG): container finished" podID="9b696cf8-3a0d-4b21-b620-308d0a11952a" containerID="a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833" exitCode=0 Feb 23 13:02:00.227353 master-0 kubenswrapper[7784]: I0223 13:02:00.227288 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" event={"ID":"9b696cf8-3a0d-4b21-b620-308d0a11952a","Type":"ContainerDied","Data":"a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833"} Feb 23 13:02:00.227395 master-0 kubenswrapper[7784]: I0223 13:02:00.227299 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" Feb 23 13:02:00.227460 master-0 kubenswrapper[7784]: I0223 13:02:00.227367 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5594d7855b-rgfnb" event={"ID":"9b696cf8-3a0d-4b21-b620-308d0a11952a","Type":"ContainerDied","Data":"5d6414c4ac09cc2d75e74f71aa67bd2beb985dfb94824f8a48bcacdac5efd244"} Feb 23 13:02:00.237650 master-0 kubenswrapper[7784]: I0223 13:02:00.237592 7784 scope.go:117] "RemoveContainer" containerID="684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d" Feb 23 13:02:00.238279 master-0 kubenswrapper[7784]: E0223 13:02:00.238216 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d\": container with ID starting with 684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d not found: ID does not exist" containerID="684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d" Feb 23 13:02:00.238359 master-0 kubenswrapper[7784]: I0223 13:02:00.238289 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d"} err="failed to get container status \"684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d\": rpc error: code = NotFound desc = could not find container \"684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d\": container with ID starting with 684504df1c67a28953557b85bab4763b13914b1f5188f92851ee65b6d7b7dd6d not found: ID does not exist" Feb 23 13:02:00.238420 master-0 kubenswrapper[7784]: I0223 13:02:00.238368 7784 scope.go:117] "RemoveContainer" containerID="a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833" Feb 23 13:02:00.263295 master-0 kubenswrapper[7784]: I0223 13:02:00.263251 7784 
scope.go:117] "RemoveContainer" containerID="a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833" Feb 23 13:02:00.264646 master-0 kubenswrapper[7784]: E0223 13:02:00.264570 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833\": container with ID starting with a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833 not found: ID does not exist" containerID="a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833" Feb 23 13:02:00.264751 master-0 kubenswrapper[7784]: I0223 13:02:00.264654 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833"} err="failed to get container status \"a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833\": rpc error: code = NotFound desc = could not find container \"a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833\": container with ID starting with a5c167cf507faaa5910070b158b7bf7db01b18529d2de4e33de2199fcf36c833 not found: ID does not exist" Feb 23 13:02:00.277006 master-0 kubenswrapper[7784]: I0223 13:02:00.276910 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57"] Feb 23 13:02:00.282506 master-0 kubenswrapper[7784]: I0223 13:02:00.282445 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ddc78cd76-b9j57"] Feb 23 13:02:00.295673 master-0 kubenswrapper[7784]: I0223 13:02:00.295611 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5594d7855b-rgfnb"] Feb 23 13:02:00.301798 master-0 kubenswrapper[7784]: I0223 13:02:00.301747 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-5594d7855b-rgfnb"] Feb 23 13:02:00.471364 master-0 kubenswrapper[7784]: I0223 13:02:00.471281 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 23 13:02:00.471851 master-0 kubenswrapper[7784]: I0223 13:02:00.471597 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="c09724e9-277a-4fb0-a6c2-8f18ecefad60" containerName="installer" containerID="cri-o://0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc" gracePeriod=30 Feb 23 13:02:00.683535 master-0 kubenswrapper[7784]: I0223 13:02:00.683457 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"] Feb 23 13:02:00.683846 master-0 kubenswrapper[7784]: E0223 13:02:00.683771 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b696cf8-3a0d-4b21-b620-308d0a11952a" containerName="controller-manager" Feb 23 13:02:00.683846 master-0 kubenswrapper[7784]: I0223 13:02:00.683788 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b696cf8-3a0d-4b21-b620-308d0a11952a" containerName="controller-manager" Feb 23 13:02:00.683846 master-0 kubenswrapper[7784]: E0223 13:02:00.683804 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fcd99e-272e-4877-a2ea-9492ad7f9689" containerName="route-controller-manager" Feb 23 13:02:00.683846 master-0 kubenswrapper[7784]: I0223 13:02:00.683812 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fcd99e-272e-4877-a2ea-9492ad7f9689" containerName="route-controller-manager" Feb 23 13:02:00.684098 master-0 kubenswrapper[7784]: I0223 13:02:00.683899 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b696cf8-3a0d-4b21-b620-308d0a11952a" containerName="controller-manager" Feb 23 13:02:00.684098 master-0 kubenswrapper[7784]: 
I0223 13:02:00.683918 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fcd99e-272e-4877-a2ea-9492ad7f9689" containerName="route-controller-manager"
Feb 23 13:02:00.684548 master-0 kubenswrapper[7784]: I0223 13:02:00.684499 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.685612 master-0 kubenswrapper[7784]: I0223 13:02:00.685569 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69f44bb786-4zj6n"]
Feb 23 13:02:00.686085 master-0 kubenswrapper[7784]: I0223 13:02:00.686053 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.690280 master-0 kubenswrapper[7784]: I0223 13:02:00.690161 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 13:02:00.690455 master-0 kubenswrapper[7784]: I0223 13:02:00.690372 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9ppv8"
Feb 23 13:02:00.690527 master-0 kubenswrapper[7784]: I0223 13:02:00.690442 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:02:00.690527 master-0 kubenswrapper[7784]: I0223 13:02:00.690449 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 13:02:00.690527 master-0 kubenswrapper[7784]: I0223 13:02:00.690521 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:02:00.690698 master-0 kubenswrapper[7784]: I0223 13:02:00.690475 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:02:00.691161 master-0 kubenswrapper[7784]: I0223 13:02:00.691108 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:02:00.691280 master-0 kubenswrapper[7784]: I0223 13:02:00.691173 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:02:00.692650 master-0 kubenswrapper[7784]: I0223 13:02:00.692222 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 13:02:00.692936 master-0 kubenswrapper[7784]: I0223 13:02:00.692894 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:02:00.693219 master-0 kubenswrapper[7784]: I0223 13:02:00.693170 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:02:00.702157 master-0 kubenswrapper[7784]: I0223 13:02:00.701997 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.702326 master-0 kubenswrapper[7784]: I0223 13:02:00.702183 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.702326 master-0 kubenswrapper[7784]: I0223 13:02:00.702233 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.702326 master-0 kubenswrapper[7784]: I0223 13:02:00.702303 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2rn\" (UniqueName: \"kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.702507 master-0 kubenswrapper[7784]: I0223 13:02:00.702413 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.702507 master-0 kubenswrapper[7784]: I0223 13:02:00.702449 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mkd2\" (UniqueName: \"kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.702507 master-0 kubenswrapper[7784]: I0223 13:02:00.702491 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.702626 master-0 kubenswrapper[7784]: I0223 13:02:00.702529 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.702626 master-0 kubenswrapper[7784]: I0223 13:02:00.702567 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.706269 master-0 kubenswrapper[7784]: I0223 13:02:00.706213 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:02:00.708170 master-0 kubenswrapper[7784]: I0223 13:02:00.708121 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"]
Feb 23 13:02:00.711508 master-0 kubenswrapper[7784]: I0223 13:02:00.711451 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f44bb786-4zj6n"]
Feb 23 13:02:00.805462 master-0 kubenswrapper[7784]: I0223 13:02:00.805386 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.805462 master-0 kubenswrapper[7784]: I0223 13:02:00.805452 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mkd2\" (UniqueName: \"kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.806032 master-0 kubenswrapper[7784]: I0223 13:02:00.805977 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.806347 master-0 kubenswrapper[7784]: I0223 13:02:00.806277 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.806411 master-0 kubenswrapper[7784]: I0223 13:02:00.806385 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.806465 master-0 kubenswrapper[7784]: I0223 13:02:00.806446 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.806581 master-0 kubenswrapper[7784]: I0223 13:02:00.806537 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.806642 master-0 kubenswrapper[7784]: I0223 13:02:00.806617 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.806750 master-0 kubenswrapper[7784]: I0223 13:02:00.806717 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq2rn\" (UniqueName: \"kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.808524 master-0 kubenswrapper[7784]: I0223 13:02:00.808471 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.808932 master-0 kubenswrapper[7784]: I0223 13:02:00.808852 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.809127 master-0 kubenswrapper[7784]: I0223 13:02:00.809086 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.809297 master-0 kubenswrapper[7784]: I0223 13:02:00.809236 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.811240 master-0 kubenswrapper[7784]: I0223 13:02:00.811187 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.811889 master-0 kubenswrapper[7784]: I0223 13:02:00.811848 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:00.813520 master-0 kubenswrapper[7784]: I0223 13:02:00.813485 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.829438 master-0 kubenswrapper[7784]: I0223 13:02:00.829376 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq2rn\" (UniqueName: \"kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:00.835144 master-0 kubenswrapper[7784]: I0223 13:02:00.835078 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mkd2\" (UniqueName: \"kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:01.007849 master-0 kubenswrapper[7784]: I0223 13:02:01.007785 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:01.019430 master-0 kubenswrapper[7784]: I0223 13:02:01.019319 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:01.436666 master-0 kubenswrapper[7784]: I0223 13:02:01.436440 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"]
Feb 23 13:02:01.447376 master-0 kubenswrapper[7784]: W0223 13:02:01.446745 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf47fa225_93fd_458b_b450_a0411e629afd.slice/crio-655e1b023cf7c57ce36ad89fb7b5ee982e50f51224c428832341448b6acdee46 WatchSource:0}: Error finding container 655e1b023cf7c57ce36ad89fb7b5ee982e50f51224c428832341448b6acdee46: Status 404 returned error can't find the container with id 655e1b023cf7c57ce36ad89fb7b5ee982e50f51224c428832341448b6acdee46
Feb 23 13:02:01.500150 master-0 kubenswrapper[7784]: I0223 13:02:01.500100 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f44bb786-4zj6n"]
Feb 23 13:02:01.523504 master-0 kubenswrapper[7784]: I0223 13:02:01.523457 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b696cf8-3a0d-4b21-b620-308d0a11952a" path="/var/lib/kubelet/pods/9b696cf8-3a0d-4b21-b620-308d0a11952a/volumes"
Feb 23 13:02:01.524437 master-0 kubenswrapper[7784]: I0223 13:02:01.524407 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fcd99e-272e-4877-a2ea-9492ad7f9689" path="/var/lib/kubelet/pods/f3fcd99e-272e-4877-a2ea-9492ad7f9689/volumes"
Feb 23 13:02:01.528959 master-0 kubenswrapper[7784]: W0223 13:02:01.528915 7784 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c61886_6cc7_44aa_b56a_81cdcc670993.slice/crio-b933426682f905b163cdeceb81784d840d9932bd08aab494209ff2aa752893c3 WatchSource:0}: Error finding container b933426682f905b163cdeceb81784d840d9932bd08aab494209ff2aa752893c3: Status 404 returned error can't find the container with id b933426682f905b163cdeceb81784d840d9932bd08aab494209ff2aa752893c3
Feb 23 13:02:02.243424 master-0 kubenswrapper[7784]: I0223 13:02:02.243125 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" event={"ID":"d7c61886-6cc7-44aa-b56a-81cdcc670993","Type":"ContainerStarted","Data":"12929995bc4c469f6a1c977c1403bda9305b2b652d95308c622e2a38faae5fab"}
Feb 23 13:02:02.243424 master-0 kubenswrapper[7784]: I0223 13:02:02.243231 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:02.243424 master-0 kubenswrapper[7784]: I0223 13:02:02.243258 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" event={"ID":"d7c61886-6cc7-44aa-b56a-81cdcc670993","Type":"ContainerStarted","Data":"b933426682f905b163cdeceb81784d840d9932bd08aab494209ff2aa752893c3"}
Feb 23 13:02:02.245266 master-0 kubenswrapper[7784]: I0223 13:02:02.245132 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" event={"ID":"f47fa225-93fd-458b-b450-a0411e629afd","Type":"ContainerStarted","Data":"5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b"}
Feb 23 13:02:02.245266 master-0 kubenswrapper[7784]: I0223 13:02:02.245183 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" event={"ID":"f47fa225-93fd-458b-b450-a0411e629afd","Type":"ContainerStarted","Data":"655e1b023cf7c57ce36ad89fb7b5ee982e50f51224c428832341448b6acdee46"}
Feb 23 13:02:02.245450 master-0 kubenswrapper[7784]: I0223 13:02:02.245405 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:02.249681 master-0 kubenswrapper[7784]: I0223 13:02:02.249634 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:02:02.430746 master-0 kubenswrapper[7784]: I0223 13:02:02.430659 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:02:02.430995 master-0 kubenswrapper[7784]: I0223 13:02:02.430928 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:02:02.431032 master-0 kubenswrapper[7784]: I0223 13:02:02.431012 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr"
Feb 23 13:02:02.431105 master-0 kubenswrapper[7784]: E0223 13:02:02.431074 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:02:02.431165 master-0 kubenswrapper[7784]: E0223 13:02:02.431147 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:03:06.431125044 +0000 UTC m=+129.165978677 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found
Feb 23 13:02:02.434117 master-0 kubenswrapper[7784]: I0223 13:02:02.434085 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr"
Feb 23 13:02:02.434695 master-0 kubenswrapper[7784]: I0223 13:02:02.434664 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-rz2zl\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:02:02.582470 master-0 kubenswrapper[7784]: I0223 13:02:02.582379 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" 
podStartSLOduration=3.5823548389999997 podStartE2EDuration="3.582354839s" podCreationTimestamp="2026-02-23 13:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:02.581482988 +0000 UTC m=+65.316336631" watchObservedRunningTime="2026-02-23 13:02:02.582354839 +0000 UTC m=+65.317208492"
Feb 23 13:02:02.590493 master-0 kubenswrapper[7784]: I0223 13:02:02.590438 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:02:02.655463 master-0 kubenswrapper[7784]: I0223 13:02:02.653009 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" podStartSLOduration=3.652991188 podStartE2EDuration="3.652991188s" podCreationTimestamp="2026-02-23 13:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:02.65145745 +0000 UTC m=+65.386311113" watchObservedRunningTime="2026-02-23 13:02:02.652991188 +0000 UTC m=+65.387844831"
Feb 23 13:02:02.667855 master-0 kubenswrapper[7784]: I0223 13:02:02.667798 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbrcr"
Feb 23 13:02:02.668023 master-0 kubenswrapper[7784]: I0223 13:02:02.667854 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:02:02.870103 master-0 kubenswrapper[7784]: I0223 13:02:02.869967 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 23 13:02:02.871527 master-0 kubenswrapper[7784]: I0223 13:02:02.870617 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:02.882964 master-0 kubenswrapper[7784]: I0223 13:02:02.882789 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 23 13:02:02.941641 master-0 kubenswrapper[7784]: I0223 13:02:02.939198 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:02.941641 master-0 kubenswrapper[7784]: I0223 13:02:02.939267 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:02.941641 master-0 kubenswrapper[7784]: I0223 13:02:02.939371 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-var-lock\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:03.040833 master-0 kubenswrapper[7784]: I0223 13:02:03.040762 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:03.040833 master-0 kubenswrapper[7784]: I0223 13:02:03.040834 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:03.041207 master-0 kubenswrapper[7784]: I0223 13:02:03.040944 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:03.041207 master-0 kubenswrapper[7784]: I0223 13:02:03.041063 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-var-lock\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:03.041207 master-0 kubenswrapper[7784]: I0223 13:02:03.041201 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-var-lock\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:03.059455 master-0 kubenswrapper[7784]: I0223 13:02:03.057742 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:03.159445 master-0 
kubenswrapper[7784]: I0223 13:02:03.159024 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"]
Feb 23 13:02:03.164576 master-0 kubenswrapper[7784]: W0223 13:02:03.164520 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b0122c7_1407_4a35_afcc_2c6b1225e830.slice/crio-f7761301fa084a7c8ad92e580706956001fc2e87ea644c3846fc5f707957b8a8 WatchSource:0}: Error finding container f7761301fa084a7c8ad92e580706956001fc2e87ea644c3846fc5f707957b8a8: Status 404 returned error can't find the container with id f7761301fa084a7c8ad92e580706956001fc2e87ea644c3846fc5f707957b8a8
Feb 23 13:02:03.196631 master-0 kubenswrapper[7784]: I0223 13:02:03.196400 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:03.252705 master-0 kubenswrapper[7784]: I0223 13:02:03.252610 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bbrcr"]
Feb 23 13:02:03.262781 master-0 kubenswrapper[7784]: I0223 13:02:03.260660 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" event={"ID":"1b0122c7-1407-4a35-afcc-2c6b1225e830","Type":"ContainerStarted","Data":"f7761301fa084a7c8ad92e580706956001fc2e87ea644c3846fc5f707957b8a8"}
Feb 23 13:02:03.263683 master-0 kubenswrapper[7784]: W0223 13:02:03.263556 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode941c759_ab95_4b30_a571_6c132ab0e639.slice/crio-cad0c8926c7213aaa96592ae903f4500c7805a83b9f9d84dbe60a8d1bef3fe27 WatchSource:0}: Error finding container cad0c8926c7213aaa96592ae903f4500c7805a83b9f9d84dbe60a8d1bef3fe27: Status 404 returned error can't find the container with id cad0c8926c7213aaa96592ae903f4500c7805a83b9f9d84dbe60a8d1bef3fe27
Feb 23 13:02:03.620980 master-0 kubenswrapper[7784]: I0223 13:02:03.620901 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 23 13:02:04.280418 master-0 kubenswrapper[7784]: I0223 13:02:04.279796 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbrcr" event={"ID":"e941c759-ab95-4b30-a571-6c132ab0e639","Type":"ContainerStarted","Data":"cad0c8926c7213aaa96592ae903f4500c7805a83b9f9d84dbe60a8d1bef3fe27"}
Feb 23 13:02:04.283469 master-0 kubenswrapper[7784]: I0223 13:02:04.283365 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"4bad4fd9-074b-4a4e-8af9-50bdc4be09df","Type":"ContainerStarted","Data":"25f5f27094e3f4980d0a60ff68ca18311759b6f56fac8ff763cd8f4150a673af"}
Feb 23 13:02:04.283469 master-0 kubenswrapper[7784]: I0223 13:02:04.283447 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"4bad4fd9-074b-4a4e-8af9-50bdc4be09df","Type":"ContainerStarted","Data":"89a7eb5dd9ce527b37e8cbbeeed3ebf6bd149269a0623c131d3e7f8e71c4f12f"}
Feb 23 13:02:04.620577 master-0 kubenswrapper[7784]: I0223 13:02:04.539258 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.539227295 podStartE2EDuration="2.539227295s" podCreationTimestamp="2026-02-23 13:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:04.304827938 +0000 UTC m=+67.039681581" watchObservedRunningTime="2026-02-23 13:02:04.539227295 +0000 UTC m=+67.274080938"
Feb 23 13:02:04.620577 master-0 kubenswrapper[7784]: I0223 13:02:04.540257 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 23 13:02:04.620577 master-0 kubenswrapper[7784]: I0223 13:02:04.540511 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="f31b1ec1-c140-46ff-8021-2a6f09b71647" containerName="installer" containerID="cri-o://ac3ded0a44d7bae55f6257d5fe584453361f41510d53c1bfa09558969fdc0c87" gracePeriod=30
Feb 23 13:02:05.293318 master-0 kubenswrapper[7784]: I0223 13:02:05.293259 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_f31b1ec1-c140-46ff-8021-2a6f09b71647/installer/0.log"
Feb 23 13:02:05.293785 master-0 kubenswrapper[7784]: I0223 13:02:05.293321 7784 generic.go:334] "Generic (PLEG): container finished" podID="f31b1ec1-c140-46ff-8021-2a6f09b71647" containerID="ac3ded0a44d7bae55f6257d5fe584453361f41510d53c1bfa09558969fdc0c87" exitCode=1
Feb 23 13:02:05.294062 master-0 kubenswrapper[7784]: I0223 13:02:05.294023 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"f31b1ec1-c140-46ff-8021-2a6f09b71647","Type":"ContainerDied","Data":"ac3ded0a44d7bae55f6257d5fe584453361f41510d53c1bfa09558969fdc0c87"}
Feb 23 13:02:05.746806 master-0 kubenswrapper[7784]: I0223 13:02:05.746757 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_f31b1ec1-c140-46ff-8021-2a6f09b71647/installer/0.log"
Feb 23 13:02:05.746969 master-0 kubenswrapper[7784]: I0223 13:02:05.746935 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:05.895695 master-0 kubenswrapper[7784]: I0223 13:02:05.895611 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-var-lock\") pod \"f31b1ec1-c140-46ff-8021-2a6f09b71647\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") "
Feb 23 13:02:05.895899 master-0 kubenswrapper[7784]: I0223 13:02:05.895751 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-kubelet-dir\") pod \"f31b1ec1-c140-46ff-8021-2a6f09b71647\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") "
Feb 23 13:02:05.895899 master-0 kubenswrapper[7784]: I0223 13:02:05.895857 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f31b1ec1-c140-46ff-8021-2a6f09b71647-kube-api-access\") pod \"f31b1ec1-c140-46ff-8021-2a6f09b71647\" (UID: \"f31b1ec1-c140-46ff-8021-2a6f09b71647\") "
Feb 23 13:02:05.896887 master-0 kubenswrapper[7784]: I0223 13:02:05.896849 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f31b1ec1-c140-46ff-8021-2a6f09b71647" (UID: "f31b1ec1-c140-46ff-8021-2a6f09b71647"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:05.896980 master-0 kubenswrapper[7784]: I0223 13:02:05.896914 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-var-lock" (OuterVolumeSpecName: "var-lock") pod "f31b1ec1-c140-46ff-8021-2a6f09b71647" (UID: "f31b1ec1-c140-46ff-8021-2a6f09b71647"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:05.900828 master-0 kubenswrapper[7784]: I0223 13:02:05.900789 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31b1ec1-c140-46ff-8021-2a6f09b71647-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f31b1ec1-c140-46ff-8021-2a6f09b71647" (UID: "f31b1ec1-c140-46ff-8021-2a6f09b71647"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:02:05.997774 master-0 kubenswrapper[7784]: I0223 13:02:05.997720 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:05.997774 master-0 kubenswrapper[7784]: I0223 13:02:05.997760 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f31b1ec1-c140-46ff-8021-2a6f09b71647-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:05.997774 master-0 kubenswrapper[7784]: I0223 13:02:05.997774 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f31b1ec1-c140-46ff-8021-2a6f09b71647-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:06.305529 master-0 kubenswrapper[7784]: I0223 13:02:06.305490 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_f31b1ec1-c140-46ff-8021-2a6f09b71647/installer/0.log"
Feb 23 13:02:06.305802 master-0 kubenswrapper[7784]: I0223 13:02:06.305593 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"f31b1ec1-c140-46ff-8021-2a6f09b71647","Type":"ContainerDied","Data":"f433a657bcc792ba408750676406b8e436df293362e80d87b9d5f70d165a6cbd"}
Feb 23 13:02:06.305802 master-0 kubenswrapper[7784]: I0223 13:02:06.305685 7784 scope.go:117] "RemoveContainer" containerID="ac3ded0a44d7bae55f6257d5fe584453361f41510d53c1bfa09558969fdc0c87"
Feb 23 13:02:06.305949 master-0 kubenswrapper[7784]: I0223 13:02:06.305916 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:06.313937 master-0 kubenswrapper[7784]: I0223 13:02:06.313878 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" event={"ID":"1b0122c7-1407-4a35-afcc-2c6b1225e830","Type":"ContainerStarted","Data":"e605f56ea553ea41b317a4b82ddae8751a5476ad313ba687cfe0354516e82158"}
Feb 23 13:02:06.315558 master-0 kubenswrapper[7784]: I0223 13:02:06.315510 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbrcr" event={"ID":"e941c759-ab95-4b30-a571-6c132ab0e639","Type":"ContainerStarted","Data":"d96a9a14d0b50e4e633cfd1091fb05361842661060e6b1747ace7500db7f1fed"}
Feb 23 13:02:06.365928 master-0 kubenswrapper[7784]: I0223 13:02:06.365871 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 23 13:02:06.368823 master-0 kubenswrapper[7784]: I0223 13:02:06.368775 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 23 13:02:06.542478 master-0 kubenswrapper[7784]: I0223 13:02:06.540757 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Feb 23 13:02:06.542478 master-0 kubenswrapper[7784]: E0223 13:02:06.540966 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31b1ec1-c140-46ff-8021-2a6f09b71647" containerName="installer"
Feb 23 13:02:06.542478 master-0 kubenswrapper[7784]: I0223 13:02:06.540980 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31b1ec1-c140-46ff-8021-2a6f09b71647" containerName="installer"
Feb 23 13:02:06.542478 master-0 
kubenswrapper[7784]: I0223 13:02:06.541064 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31b1ec1-c140-46ff-8021-2a6f09b71647" containerName="installer" Feb 23 13:02:06.542478 master-0 kubenswrapper[7784]: I0223 13:02:06.541439 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.548905 master-0 kubenswrapper[7784]: I0223 13:02:06.548836 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-654sf" Feb 23 13:02:06.555652 master-0 kubenswrapper[7784]: I0223 13:02:06.553203 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 23 13:02:06.608269 master-0 kubenswrapper[7784]: I0223 13:02:06.608038 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.608269 master-0 kubenswrapper[7784]: I0223 13:02:06.608134 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-var-lock\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.608269 master-0 kubenswrapper[7784]: I0223 13:02:06.608165 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/283fd2f4-771b-4592-a143-b7e3a5ed6765-kube-api-access\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.710020 master-0 kubenswrapper[7784]: I0223 13:02:06.709607 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.710020 master-0 kubenswrapper[7784]: I0223 13:02:06.709767 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-var-lock\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.710020 master-0 kubenswrapper[7784]: I0223 13:02:06.709824 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/283fd2f4-771b-4592-a143-b7e3a5ed6765-kube-api-access\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.710511 master-0 kubenswrapper[7784]: I0223 13:02:06.710132 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-var-lock\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.710511 master-0 kubenswrapper[7784]: I0223 13:02:06.710280 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.742556 master-0 kubenswrapper[7784]: I0223 13:02:06.742462 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/283fd2f4-771b-4592-a143-b7e3a5ed6765-kube-api-access\") pod \"installer-4-master-0\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.866524 master-0 kubenswrapper[7784]: I0223 13:02:06.866321 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:07.323133 master-0 kubenswrapper[7784]: I0223 13:02:07.322512 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbrcr" event={"ID":"e941c759-ab95-4b30-a571-6c132ab0e639","Type":"ContainerStarted","Data":"5594957a938cfecb3f237714f65c598e0f3ae169da84f5f321c93da2a014807e"} Feb 23 13:02:07.323133 master-0 kubenswrapper[7784]: I0223 13:02:07.323058 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 23 13:02:07.326548 master-0 kubenswrapper[7784]: I0223 13:02:07.326502 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" event={"ID":"1b0122c7-1407-4a35-afcc-2c6b1225e830","Type":"ContainerStarted","Data":"1a566738f1d6ae1fbd631fc2a674cd4e994eb7c4949566fcd48562d5dff33cf2"} Feb 23 13:02:07.540852 master-0 kubenswrapper[7784]: I0223 13:02:07.540705 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31b1ec1-c140-46ff-8021-2a6f09b71647" path="/var/lib/kubelet/pods/f31b1ec1-c140-46ff-8021-2a6f09b71647/volumes" Feb 23 13:02:08.334475 master-0 kubenswrapper[7784]: I0223 13:02:08.334393 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" 
event={"ID":"283fd2f4-771b-4592-a143-b7e3a5ed6765","Type":"ContainerStarted","Data":"698f0709a0bf6365bf7afb4765b93fe2fefc787772f82b5103295a5f25bae796"} Feb 23 13:02:08.334475 master-0 kubenswrapper[7784]: I0223 13:02:08.334479 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"283fd2f4-771b-4592-a143-b7e3a5ed6765","Type":"ContainerStarted","Data":"cbee83a28e4e85b2d4891dc24855eb2cc6165c6448a3273aa5f8a3ec8e2cf444"} Feb 23 13:02:09.523917 master-0 kubenswrapper[7784]: I0223 13:02:09.523800 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=3.523772701 podStartE2EDuration="3.523772701s" podCreationTimestamp="2026-02-23 13:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:08.753898809 +0000 UTC m=+71.488752512" watchObservedRunningTime="2026-02-23 13:02:09.523772701 +0000 UTC m=+72.258626354" Feb 23 13:02:09.526124 master-0 kubenswrapper[7784]: I0223 13:02:09.526076 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm"] Feb 23 13:02:09.527111 master-0 kubenswrapper[7784]: I0223 13:02:09.527057 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:02:09.530197 master-0 kubenswrapper[7784]: I0223 13:02:09.530129 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 13:02:09.530650 master-0 kubenswrapper[7784]: I0223 13:02:09.530604 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 13:02:09.530867 master-0 kubenswrapper[7784]: I0223 13:02:09.530832 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-sxgbj" Feb 23 13:02:09.531173 master-0 kubenswrapper[7784]: I0223 13:02:09.531138 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 13:02:09.552797 master-0 kubenswrapper[7784]: I0223 13:02:09.552740 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm"] Feb 23 13:02:09.668795 master-0 kubenswrapper[7784]: I0223 13:02:09.668705 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:02:09.669248 master-0 kubenswrapper[7784]: I0223 13:02:09.668835 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvg7b\" (UniqueName: \"kubernetes.io/projected/e5802841-52dc-4d15-a252-0eac70e9fbbc-kube-api-access-nvg7b\") pod 
\"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:02:09.770369 master-0 kubenswrapper[7784]: I0223 13:02:09.770274 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:02:09.770587 master-0 kubenswrapper[7784]: I0223 13:02:09.770413 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvg7b\" (UniqueName: \"kubernetes.io/projected/e5802841-52dc-4d15-a252-0eac70e9fbbc-kube-api-access-nvg7b\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:02:09.776044 master-0 kubenswrapper[7784]: I0223 13:02:09.775955 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:02:09.813899 master-0 kubenswrapper[7784]: I0223 13:02:09.811948 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvg7b\" (UniqueName: \"kubernetes.io/projected/e5802841-52dc-4d15-a252-0eac70e9fbbc-kube-api-access-nvg7b\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: 
\"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:02:09.856802 master-0 kubenswrapper[7784]: I0223 13:02:09.856721 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:02:10.292515 master-0 kubenswrapper[7784]: I0223 13:02:10.292424 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm"] Feb 23 13:02:10.307999 master-0 kubenswrapper[7784]: W0223 13:02:10.307892 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5802841_52dc_4d15_a252_0eac70e9fbbc.slice/crio-cabaebba9338acf6d962ed84ef7c4c178c189927ee3320bb1144f49c679ae574 WatchSource:0}: Error finding container cabaebba9338acf6d962ed84ef7c4c178c189927ee3320bb1144f49c679ae574: Status 404 returned error can't find the container with id cabaebba9338acf6d962ed84ef7c4c178c189927ee3320bb1144f49c679ae574 Feb 23 13:02:10.352148 master-0 kubenswrapper[7784]: I0223 13:02:10.352039 7784 generic.go:334] "Generic (PLEG): container finished" podID="d9b02d3c-f671-4850-8c6e-315044a1376c" containerID="f9f2d3833534ce883ca50eb44438eaa5f1540dd7900a3929b7c7f66a4a78289a" exitCode=0 Feb 23 13:02:10.352410 master-0 kubenswrapper[7784]: I0223 13:02:10.352126 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" event={"ID":"d9b02d3c-f671-4850-8c6e-315044a1376c","Type":"ContainerDied","Data":"f9f2d3833534ce883ca50eb44438eaa5f1540dd7900a3929b7c7f66a4a78289a"} Feb 23 13:02:10.353271 master-0 kubenswrapper[7784]: I0223 13:02:10.353211 7784 scope.go:117] "RemoveContainer" containerID="f9f2d3833534ce883ca50eb44438eaa5f1540dd7900a3929b7c7f66a4a78289a" Feb 23 13:02:10.355397 master-0 kubenswrapper[7784]: I0223 13:02:10.353949 7784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" event={"ID":"e5802841-52dc-4d15-a252-0eac70e9fbbc","Type":"ContainerStarted","Data":"cabaebba9338acf6d962ed84ef7c4c178c189927ee3320bb1144f49c679ae574"} Feb 23 13:02:11.360544 master-0 kubenswrapper[7784]: I0223 13:02:11.360495 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" event={"ID":"d9b02d3c-f671-4850-8c6e-315044a1376c","Type":"ContainerStarted","Data":"a371a0ec45765fbdd026868b4d9728017f8429bf71f526d8798ec8e60adb809a"} Feb 23 13:02:11.645297 master-0 kubenswrapper[7784]: I0223 13:02:11.645229 7784 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 23 13:02:11.645618 master-0 kubenswrapper[7784]: I0223 13:02:11.645583 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" containerID="cri-o://09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7" gracePeriod=30 Feb 23 13:02:11.650324 master-0 kubenswrapper[7784]: I0223 13:02:11.645691 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" containerID="cri-o://b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4" gracePeriod=30 Feb 23 13:02:11.650324 master-0 kubenswrapper[7784]: I0223 13:02:11.647844 7784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:02:11.650324 master-0 kubenswrapper[7784]: E0223 13:02:11.648192 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 23 13:02:11.650324 master-0 kubenswrapper[7784]: I0223 13:02:11.648209 7784 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 23 13:02:11.650324 master-0 kubenswrapper[7784]: E0223 13:02:11.648239 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 23 13:02:11.650324 master-0 kubenswrapper[7784]: I0223 13:02:11.648247 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 23 13:02:11.650324 master-0 kubenswrapper[7784]: I0223 13:02:11.648392 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 23 13:02:11.650324 master-0 kubenswrapper[7784]: I0223 13:02:11.648411 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 23 13:02:11.664216 master-0 kubenswrapper[7784]: I0223 13:02:11.661128 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.822255 master-0 kubenswrapper[7784]: I0223 13:02:11.822149 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.822255 master-0 kubenswrapper[7784]: I0223 13:02:11.822216 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.824112 master-0 kubenswrapper[7784]: I0223 13:02:11.822669 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.824112 master-0 kubenswrapper[7784]: I0223 13:02:11.822773 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.824112 master-0 kubenswrapper[7784]: I0223 13:02:11.823048 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.824112 master-0 
kubenswrapper[7784]: I0223 13:02:11.823119 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.924499 master-0 kubenswrapper[7784]: I0223 13:02:11.924294 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.924499 master-0 kubenswrapper[7784]: I0223 13:02:11.924418 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.924800 master-0 kubenswrapper[7784]: I0223 13:02:11.924670 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.924800 master-0 kubenswrapper[7784]: I0223 13:02:11.924450 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.924800 master-0 kubenswrapper[7784]: I0223 13:02:11.924730 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.924800 master-0 kubenswrapper[7784]: I0223 13:02:11.924773 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.924800 master-0 kubenswrapper[7784]: I0223 13:02:11.924773 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.925006 master-0 kubenswrapper[7784]: I0223 13:02:11.924891 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.925006 master-0 kubenswrapper[7784]: I0223 13:02:11.924898 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.925006 master-0 kubenswrapper[7784]: I0223 13:02:11.924990 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.925094 master-0 
kubenswrapper[7784]: I0223 13:02:11.925038 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:11.925094 master-0 kubenswrapper[7784]: I0223 13:02:11.925072 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:02:12.253274 master-0 kubenswrapper[7784]: I0223 13:02:12.252851 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_797b4e06-e895-4ccc-a8f8-9de5d3a6663f/installer/0.log" Feb 23 13:02:12.253274 master-0 kubenswrapper[7784]: I0223 13:02:12.252934 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:02:12.367625 master-0 kubenswrapper[7784]: I0223 13:02:12.367463 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_797b4e06-e895-4ccc-a8f8-9de5d3a6663f/installer/0.log" Feb 23 13:02:12.367625 master-0 kubenswrapper[7784]: I0223 13:02:12.367530 7784 generic.go:334] "Generic (PLEG): container finished" podID="797b4e06-e895-4ccc-a8f8-9de5d3a6663f" containerID="7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab" exitCode=1 Feb 23 13:02:12.367625 master-0 kubenswrapper[7784]: I0223 13:02:12.367573 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"797b4e06-e895-4ccc-a8f8-9de5d3a6663f","Type":"ContainerDied","Data":"7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab"} Feb 23 13:02:12.367625 master-0 kubenswrapper[7784]: I0223 13:02:12.367613 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"797b4e06-e895-4ccc-a8f8-9de5d3a6663f","Type":"ContainerDied","Data":"f29334ecf8fe0616ba9fdb0115a73378b0ec78d2a311daffbf6cd7a0e8642f2d"} Feb 23 13:02:12.367625 master-0 kubenswrapper[7784]: I0223 13:02:12.367641 7784 scope.go:117] "RemoveContainer" containerID="7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab" Feb 23 13:02:12.377396 master-0 kubenswrapper[7784]: I0223 13:02:12.367671 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:02:12.390387 master-0 kubenswrapper[7784]: I0223 13:02:12.390309 7784 scope.go:117] "RemoveContainer" containerID="7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab" Feb 23 13:02:12.390790 master-0 kubenswrapper[7784]: E0223 13:02:12.390758 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab\": container with ID starting with 7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab not found: ID does not exist" containerID="7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab" Feb 23 13:02:12.390830 master-0 kubenswrapper[7784]: I0223 13:02:12.390790 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab"} err="failed to get container status \"7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab\": rpc error: code = NotFound desc = could not find container \"7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab\": container with ID starting with 7e52f3e261c4a92d7553c1eb1cbb5fff3459526fae67719a2a212081ed6476ab not found: ID does not exist" Feb 23 13:02:12.437819 master-0 kubenswrapper[7784]: I0223 13:02:12.437743 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kube-api-access\") pod \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " Feb 23 13:02:12.438002 master-0 kubenswrapper[7784]: I0223 13:02:12.437848 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kubelet-dir\") pod 
\"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " Feb 23 13:02:12.438002 master-0 kubenswrapper[7784]: I0223 13:02:12.437938 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-var-lock\") pod \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\" (UID: \"797b4e06-e895-4ccc-a8f8-9de5d3a6663f\") " Feb 23 13:02:12.438002 master-0 kubenswrapper[7784]: I0223 13:02:12.437967 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "797b4e06-e895-4ccc-a8f8-9de5d3a6663f" (UID: "797b4e06-e895-4ccc-a8f8-9de5d3a6663f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:12.438155 master-0 kubenswrapper[7784]: I0223 13:02:12.438038 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-var-lock" (OuterVolumeSpecName: "var-lock") pod "797b4e06-e895-4ccc-a8f8-9de5d3a6663f" (UID: "797b4e06-e895-4ccc-a8f8-9de5d3a6663f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:12.438279 master-0 kubenswrapper[7784]: I0223 13:02:12.438254 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:12.438279 master-0 kubenswrapper[7784]: I0223 13:02:12.438276 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:12.444045 master-0 kubenswrapper[7784]: I0223 13:02:12.443968 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "797b4e06-e895-4ccc-a8f8-9de5d3a6663f" (UID: "797b4e06-e895-4ccc-a8f8-9de5d3a6663f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:12.540021 master-0 kubenswrapper[7784]: I0223 13:02:12.539912 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797b4e06-e895-4ccc-a8f8-9de5d3a6663f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:13.385348 master-0 kubenswrapper[7784]: I0223 13:02:13.385270 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" event={"ID":"e5802841-52dc-4d15-a252-0eac70e9fbbc","Type":"ContainerStarted","Data":"a0b82533ef8a23dd50bebab82c0ca8db95bf68be3db11bf32c9c3702f2b24d95"} Feb 23 13:02:17.996054 master-0 kubenswrapper[7784]: I0223 13:02:17.995978 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_c09724e9-277a-4fb0-a6c2-8f18ecefad60/installer/0.log" Feb 23 13:02:17.996457 master-0 kubenswrapper[7784]: I0223 13:02:17.996119 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:02:18.031600 master-0 kubenswrapper[7784]: I0223 13:02:18.031513 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kubelet-dir\") pod \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " Feb 23 13:02:18.031600 master-0 kubenswrapper[7784]: I0223 13:02:18.031612 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kube-api-access\") pod \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " Feb 23 13:02:18.031740 master-0 kubenswrapper[7784]: I0223 13:02:18.031669 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-var-lock\") pod \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\" (UID: \"c09724e9-277a-4fb0-a6c2-8f18ecefad60\") " Feb 23 13:02:18.031913 master-0 kubenswrapper[7784]: I0223 13:02:18.031839 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c09724e9-277a-4fb0-a6c2-8f18ecefad60" (UID: "c09724e9-277a-4fb0-a6c2-8f18ecefad60"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:18.031913 master-0 kubenswrapper[7784]: I0223 13:02:18.031884 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-var-lock" (OuterVolumeSpecName: "var-lock") pod "c09724e9-277a-4fb0-a6c2-8f18ecefad60" (UID: "c09724e9-277a-4fb0-a6c2-8f18ecefad60"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:18.037090 master-0 kubenswrapper[7784]: I0223 13:02:18.037027 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c09724e9-277a-4fb0-a6c2-8f18ecefad60" (UID: "c09724e9-277a-4fb0-a6c2-8f18ecefad60"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:18.133207 master-0 kubenswrapper[7784]: I0223 13:02:18.133095 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:18.133207 master-0 kubenswrapper[7784]: I0223 13:02:18.133165 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:18.133207 master-0 kubenswrapper[7784]: I0223 13:02:18.133188 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c09724e9-277a-4fb0-a6c2-8f18ecefad60-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:18.422033 master-0 kubenswrapper[7784]: I0223 13:02:18.421972 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_c09724e9-277a-4fb0-a6c2-8f18ecefad60/installer/0.log" Feb 23 13:02:18.422313 master-0 kubenswrapper[7784]: I0223 13:02:18.422045 7784 generic.go:334] "Generic (PLEG): container finished" podID="c09724e9-277a-4fb0-a6c2-8f18ecefad60" containerID="0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc" exitCode=1 Feb 23 13:02:18.422313 master-0 kubenswrapper[7784]: I0223 13:02:18.422086 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"c09724e9-277a-4fb0-a6c2-8f18ecefad60","Type":"ContainerDied","Data":"0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc"} Feb 23 13:02:18.422313 master-0 kubenswrapper[7784]: I0223 13:02:18.422120 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"c09724e9-277a-4fb0-a6c2-8f18ecefad60","Type":"ContainerDied","Data":"35a818ec3dd4aefac3281c1a7ddf64f96e3547a3060f7fe2a1805808d16db4f1"} Feb 23 13:02:18.422313 master-0 kubenswrapper[7784]: I0223 13:02:18.422142 7784 scope.go:117] "RemoveContainer" containerID="0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc" Feb 23 13:02:18.422313 master-0 kubenswrapper[7784]: I0223 13:02:18.422168 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:02:18.443777 master-0 kubenswrapper[7784]: I0223 13:02:18.443727 7784 scope.go:117] "RemoveContainer" containerID="0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc" Feb 23 13:02:18.444299 master-0 kubenswrapper[7784]: E0223 13:02:18.444257 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc\": container with ID starting with 0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc not found: ID does not exist" containerID="0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc" Feb 23 13:02:18.444391 master-0 kubenswrapper[7784]: I0223 13:02:18.444294 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc"} err="failed to get container status \"0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc\": rpc error: code = 
NotFound desc = could not find container \"0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc\": container with ID starting with 0652f76526c3dcdd052fc880e42b00f6f8669f438ea04a37493e04c78832b1dc not found: ID does not exist" Feb 23 13:02:24.462123 master-0 kubenswrapper[7784]: I0223 13:02:24.462021 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d" exitCode=1 Feb 23 13:02:24.462123 master-0 kubenswrapper[7784]: I0223 13:02:24.462085 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d"} Feb 23 13:02:24.463165 master-0 kubenswrapper[7784]: I0223 13:02:24.462214 7784 scope.go:117] "RemoveContainer" containerID="18dceb7e5c040918c12a2232d059dfb40d6eebb6d7f4618c2280a12d936f7b09" Feb 23 13:02:24.463165 master-0 kubenswrapper[7784]: I0223 13:02:24.463123 7784 scope.go:117] "RemoveContainer" containerID="af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d" Feb 23 13:02:24.742024 master-0 kubenswrapper[7784]: E0223 13:02:24.741946 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 23 13:02:24.742966 master-0 kubenswrapper[7784]: I0223 13:02:24.742917 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:02:24.768328 master-0 kubenswrapper[7784]: W0223 13:02:24.768244 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a83278819db2092fa26d8274eb3f00.slice/crio-66e2f9dd489fc518008e7b30fdb08bebd37881d539caaf4390c9afa1dff28c3f WatchSource:0}: Error finding container 66e2f9dd489fc518008e7b30fdb08bebd37881d539caaf4390c9afa1dff28c3f: Status 404 returned error can't find the container with id 66e2f9dd489fc518008e7b30fdb08bebd37881d539caaf4390c9afa1dff28c3f Feb 23 13:02:25.471954 master-0 kubenswrapper[7784]: I0223 13:02:25.471858 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae"} Feb 23 13:02:25.474211 master-0 kubenswrapper[7784]: I0223 13:02:25.474131 7784 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="7ccd6c0f7b7169060efbd69b89d31fe78ead24ef11ec518fb0a078ce9f74b4ec" exitCode=0 Feb 23 13:02:25.474309 master-0 kubenswrapper[7784]: I0223 13:02:25.474211 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"7ccd6c0f7b7169060efbd69b89d31fe78ead24ef11ec518fb0a078ce9f74b4ec"} Feb 23 13:02:25.474309 master-0 kubenswrapper[7784]: I0223 13:02:25.474269 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"66e2f9dd489fc518008e7b30fdb08bebd37881d539caaf4390c9afa1dff28c3f"} Feb 23 13:02:26.289260 master-0 kubenswrapper[7784]: I0223 13:02:26.289126 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:02:26.480733 master-0 kubenswrapper[7784]: I0223 13:02:26.480666 7784 generic.go:334] "Generic (PLEG): container finished" podID="29d3a080-c8a3-4359-9442-972bf4bb9b04" containerID="dd4d5f4a0ab82fe5e433041fcf11c703ce19588ca738c6da0621782807f531c9" exitCode=0 Feb 23 13:02:26.481499 master-0 kubenswrapper[7784]: I0223 13:02:26.480808 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"29d3a080-c8a3-4359-9442-972bf4bb9b04","Type":"ContainerDied","Data":"dd4d5f4a0ab82fe5e433041fcf11c703ce19588ca738c6da0621782807f531c9"} Feb 23 13:02:27.885722 master-0 kubenswrapper[7784]: I0223 13:02:27.885603 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 23 13:02:28.081444 master-0 kubenswrapper[7784]: I0223 13:02:28.081265 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29d3a080-c8a3-4359-9442-972bf4bb9b04-kube-api-access\") pod \"29d3a080-c8a3-4359-9442-972bf4bb9b04\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " Feb 23 13:02:28.081688 master-0 kubenswrapper[7784]: I0223 13:02:28.081451 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-var-lock\") pod \"29d3a080-c8a3-4359-9442-972bf4bb9b04\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " Feb 23 13:02:28.081688 master-0 kubenswrapper[7784]: I0223 13:02:28.081562 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-kubelet-dir\") pod \"29d3a080-c8a3-4359-9442-972bf4bb9b04\" (UID: \"29d3a080-c8a3-4359-9442-972bf4bb9b04\") " Feb 23 13:02:28.081688 master-0 kubenswrapper[7784]: I0223 
13:02:28.081624 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-var-lock" (OuterVolumeSpecName: "var-lock") pod "29d3a080-c8a3-4359-9442-972bf4bb9b04" (UID: "29d3a080-c8a3-4359-9442-972bf4bb9b04"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:28.081863 master-0 kubenswrapper[7784]: I0223 13:02:28.081790 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "29d3a080-c8a3-4359-9442-972bf4bb9b04" (UID: "29d3a080-c8a3-4359-9442-972bf4bb9b04"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:28.082138 master-0 kubenswrapper[7784]: I0223 13:02:28.082092 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:28.082178 master-0 kubenswrapper[7784]: I0223 13:02:28.082139 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29d3a080-c8a3-4359-9442-972bf4bb9b04-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:28.085712 master-0 kubenswrapper[7784]: I0223 13:02:28.085641 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d3a080-c8a3-4359-9442-972bf4bb9b04-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "29d3a080-c8a3-4359-9442-972bf4bb9b04" (UID: "29d3a080-c8a3-4359-9442-972bf4bb9b04"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:28.183030 master-0 kubenswrapper[7784]: I0223 13:02:28.182931 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29d3a080-c8a3-4359-9442-972bf4bb9b04-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:28.498858 master-0 kubenswrapper[7784]: I0223 13:02:28.498820 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-dpxl4_d71885db-c29e-429a-aa1f-1c274796a69f/openshift-controller-manager-operator/0.log" Feb 23 13:02:28.499144 master-0 kubenswrapper[7784]: I0223 13:02:28.499120 7784 generic.go:334] "Generic (PLEG): container finished" podID="d71885db-c29e-429a-aa1f-1c274796a69f" containerID="664bed9a58a32d7def57d5398a174d1c1950d8f182a5fd20785e403d394c58a2" exitCode=1 Feb 23 13:02:28.499260 master-0 kubenswrapper[7784]: I0223 13:02:28.499195 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" event={"ID":"d71885db-c29e-429a-aa1f-1c274796a69f","Type":"ContainerDied","Data":"664bed9a58a32d7def57d5398a174d1c1950d8f182a5fd20785e403d394c58a2"} Feb 23 13:02:28.500189 master-0 kubenswrapper[7784]: I0223 13:02:28.500140 7784 scope.go:117] "RemoveContainer" containerID="664bed9a58a32d7def57d5398a174d1c1950d8f182a5fd20785e403d394c58a2" Feb 23 13:02:28.501745 master-0 kubenswrapper[7784]: I0223 13:02:28.501725 7784 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="f49a7c31e3a171926240734ad805919af2d46930792b7ef061d645ad8ae0dac5" exitCode=1 Feb 23 13:02:28.501924 master-0 kubenswrapper[7784]: I0223 13:02:28.501828 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" 
event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"f49a7c31e3a171926240734ad805919af2d46930792b7ef061d645ad8ae0dac5"} Feb 23 13:02:28.503018 master-0 kubenswrapper[7784]: I0223 13:02:28.502953 7784 scope.go:117] "RemoveContainer" containerID="f49a7c31e3a171926240734ad805919af2d46930792b7ef061d645ad8ae0dac5" Feb 23 13:02:28.505421 master-0 kubenswrapper[7784]: I0223 13:02:28.505312 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"29d3a080-c8a3-4359-9442-972bf4bb9b04","Type":"ContainerDied","Data":"f2f8d85ad085a2df67368f809c78552ac79db7bb7c6a318c3cb36dbd40dda7af"} Feb 23 13:02:28.505421 master-0 kubenswrapper[7784]: I0223 13:02:28.505373 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 23 13:02:28.505909 master-0 kubenswrapper[7784]: I0223 13:02:28.505375 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2f8d85ad085a2df67368f809c78552ac79db7bb7c6a318c3cb36dbd40dda7af" Feb 23 13:02:29.290218 master-0 kubenswrapper[7784]: I0223 13:02:29.290061 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:02:29.391102 master-0 kubenswrapper[7784]: E0223 13:02:29.391001 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:02:29.521982 master-0 kubenswrapper[7784]: I0223 13:02:29.521880 7784 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357"} Feb 23 13:02:29.523841 master-0 kubenswrapper[7784]: I0223 13:02:29.523769 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-dpxl4_d71885db-c29e-429a-aa1f-1c274796a69f/openshift-controller-manager-operator/0.log" Feb 23 13:02:29.523967 master-0 kubenswrapper[7784]: I0223 13:02:29.523854 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" event={"ID":"d71885db-c29e-429a-aa1f-1c274796a69f","Type":"ContainerStarted","Data":"114dcbc6fdab8f4038f5c9cf10d758f8abe2dd1b2791ef3c7f6e715028e0da39"} Feb 23 13:02:29.972220 master-0 kubenswrapper[7784]: E0223 13:02:29.971871 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:02:19Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:02:19Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:02:19Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:02:19Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a
4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\
"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:02:30.478169 master-0 kubenswrapper[7784]: I0223 13:02:30.478031 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:02:31.542494 master-0 kubenswrapper[7784]: I0223 13:02:31.542399 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_54b76471-bb9d-45a1-b3be-53e4f013e604/installer/0.log"
Feb 23 13:02:31.542494 master-0 kubenswrapper[7784]: I0223 13:02:31.542476 7784 generic.go:334] "Generic (PLEG): container finished" podID="54b76471-bb9d-45a1-b3be-53e4f013e604" containerID="886c9563273e1980f3ecc0464372fafeb8a67330b8225928122ba4af2f8bda52" exitCode=1
Feb 23 13:02:31.543498 master-0 kubenswrapper[7784]: I0223 13:02:31.542527 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"54b76471-bb9d-45a1-b3be-53e4f013e604","Type":"ContainerDied","Data":"886c9563273e1980f3ecc0464372fafeb8a67330b8225928122ba4af2f8bda52"}
Feb 23 13:02:32.913255 master-0 kubenswrapper[7784]: I0223 13:02:32.913171 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_54b76471-bb9d-45a1-b3be-53e4f013e604/installer/0.log"
Feb 23 13:02:32.914003 master-0 kubenswrapper[7784]: I0223 13:02:32.913286 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:02:32.959294 master-0 kubenswrapper[7784]: I0223 13:02:32.959178 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-var-lock\") pod \"54b76471-bb9d-45a1-b3be-53e4f013e604\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") "
Feb 23 13:02:32.959294 master-0 kubenswrapper[7784]: I0223 13:02:32.959293 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-kubelet-dir\") pod \"54b76471-bb9d-45a1-b3be-53e4f013e604\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") "
Feb 23 13:02:32.959664 master-0 kubenswrapper[7784]: I0223 13:02:32.959387 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-var-lock" (OuterVolumeSpecName: "var-lock") pod "54b76471-bb9d-45a1-b3be-53e4f013e604" (UID: "54b76471-bb9d-45a1-b3be-53e4f013e604"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:32.959664 master-0 kubenswrapper[7784]: I0223 13:02:32.959430 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "54b76471-bb9d-45a1-b3be-53e4f013e604" (UID: "54b76471-bb9d-45a1-b3be-53e4f013e604"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:32.959664 master-0 kubenswrapper[7784]: I0223 13:02:32.959556 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54b76471-bb9d-45a1-b3be-53e4f013e604-kube-api-access\") pod \"54b76471-bb9d-45a1-b3be-53e4f013e604\" (UID: \"54b76471-bb9d-45a1-b3be-53e4f013e604\") "
Feb 23 13:02:32.959958 master-0 kubenswrapper[7784]: I0223 13:02:32.959911 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:32.959958 master-0 kubenswrapper[7784]: I0223 13:02:32.959942 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54b76471-bb9d-45a1-b3be-53e4f013e604-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:32.965026 master-0 kubenswrapper[7784]: I0223 13:02:32.964958 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b76471-bb9d-45a1-b3be-53e4f013e604-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "54b76471-bb9d-45a1-b3be-53e4f013e604" (UID: "54b76471-bb9d-45a1-b3be-53e4f013e604"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:02:33.062176 master-0 kubenswrapper[7784]: I0223 13:02:33.062051 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54b76471-bb9d-45a1-b3be-53e4f013e604-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:33.560675 master-0 kubenswrapper[7784]: I0223 13:02:33.560580 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_54b76471-bb9d-45a1-b3be-53e4f013e604/installer/0.log"
Feb 23 13:02:33.560999 master-0 kubenswrapper[7784]: I0223 13:02:33.560724 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"54b76471-bb9d-45a1-b3be-53e4f013e604","Type":"ContainerDied","Data":"2aa3a18ca035e1c8a4a8e4c55ea1292328496404fe053666f3bd40c3fd5062f7"}
Feb 23 13:02:33.560999 master-0 kubenswrapper[7784]: I0223 13:02:33.560772 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa3a18ca035e1c8a4a8e4c55ea1292328496404fe053666f3bd40c3fd5062f7"
Feb 23 13:02:33.560999 master-0 kubenswrapper[7784]: I0223 13:02:33.560852 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 23 13:02:38.482331 master-0 kubenswrapper[7784]: E0223 13:02:38.482231 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:39.290939 master-0 kubenswrapper[7784]: I0223 13:02:39.290815 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:02:39.392382 master-0 kubenswrapper[7784]: E0223 13:02:39.392216 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:02:39.605423 master-0 kubenswrapper[7784]: I0223 13:02:39.605143 7784 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="5f2912a7aba95d3d6fab8df1a73bdd941d5cec4d910c0279136faf5966960607" exitCode=0
Feb 23 13:02:39.605423 master-0 kubenswrapper[7784]: I0223 13:02:39.605268 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"5f2912a7aba95d3d6fab8df1a73bdd941d5cec4d910c0279136faf5966960607"}
Feb 23 13:02:39.610761 master-0 kubenswrapper[7784]: I0223 13:02:39.610690 7784 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4" exitCode=0
Feb 23 13:02:39.973196 master-0 kubenswrapper[7784]: E0223 13:02:39.972824 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:02:41.775590 master-0 kubenswrapper[7784]: I0223 13:02:41.775495 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log"
Feb 23 13:02:41.776256 master-0 kubenswrapper[7784]: I0223 13:02:41.775623 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:02:41.895623 master-0 kubenswrapper[7784]: I0223 13:02:41.895484 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") "
Feb 23 13:02:41.895623 master-0 kubenswrapper[7784]: I0223 13:02:41.895616 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") "
Feb 23 13:02:41.896151 master-0 kubenswrapper[7784]: I0223 13:02:41.895714 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs" (OuterVolumeSpecName: "certs") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:41.896151 master-0 kubenswrapper[7784]: I0223 13:02:41.895738 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir" (OuterVolumeSpecName: "data-dir") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:41.896151 master-0 kubenswrapper[7784]: I0223 13:02:41.896040 7784 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:41.896151 master-0 kubenswrapper[7784]: I0223 13:02:41.896068 7784 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:42.637014 master-0 kubenswrapper[7784]: I0223 13:02:42.636905 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log"
Feb 23 13:02:42.637014 master-0 kubenswrapper[7784]: I0223 13:02:42.636996 7784 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7" exitCode=137
Feb 23 13:02:42.637457 master-0 kubenswrapper[7784]: I0223 13:02:42.637083 7784 scope.go:117] "RemoveContainer" containerID="b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4"
Feb 23 13:02:42.637457 master-0 kubenswrapper[7784]: I0223 13:02:42.637135 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:02:42.662496 master-0 kubenswrapper[7784]: I0223 13:02:42.661557 7784 scope.go:117] "RemoveContainer" containerID="09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7"
Feb 23 13:02:42.683542 master-0 kubenswrapper[7784]: I0223 13:02:42.683465 7784 scope.go:117] "RemoveContainer" containerID="b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4"
Feb 23 13:02:42.684241 master-0 kubenswrapper[7784]: E0223 13:02:42.684169 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4\": container with ID starting with b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4 not found: ID does not exist" containerID="b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4"
Feb 23 13:02:42.684383 master-0 kubenswrapper[7784]: I0223 13:02:42.684238 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4"} err="failed to get container status \"b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4\": rpc error: code = NotFound desc = could not find container \"b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4\": container with ID starting with b95c6d76f8422f6ef49b92a37ee9a460cf7dc7dbeb154e4753139b9889b5b5d4 not found: ID does not exist"
Feb 23 13:02:42.684383 master-0 kubenswrapper[7784]: I0223 13:02:42.684280 7784 scope.go:117] "RemoveContainer" containerID="09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7"
Feb 23 13:02:42.684883 master-0 kubenswrapper[7784]: E0223 13:02:42.684812 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7\": container with ID starting with 09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7 not found: ID does not exist" containerID="09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7"
Feb 23 13:02:42.684976 master-0 kubenswrapper[7784]: I0223 13:02:42.684882 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7"} err="failed to get container status \"09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7\": rpc error: code = NotFound desc = could not find container \"09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7\": container with ID starting with 09e0902afe49019bf42dc2315876e3a150f916939597d6b8debdcba7fb534cd7 not found: ID does not exist"
Feb 23 13:02:43.520385 master-0 kubenswrapper[7784]: I0223 13:02:43.520239 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12dab5d350ebc129b0bfa4714d330b15" path="/var/lib/kubelet/pods/12dab5d350ebc129b0bfa4714d330b15/volumes"
Feb 23 13:02:43.521169 master-0 kubenswrapper[7784]: I0223 13:02:43.520686 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 23 13:02:45.668333 master-0 kubenswrapper[7784]: E0223 13:02:45.667886 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e1c323656f82 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:11.645591426 +0000 UTC m=+74.380445079,LastTimestamp:2026-02-23 13:02:11.645591426 +0000 UTC m=+74.380445079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:02:49.290081 master-0 kubenswrapper[7784]: I0223 13:02:49.289900 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:02:49.393595 master-0 kubenswrapper[7784]: E0223 13:02:49.393495 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:02:49.973832 master-0 kubenswrapper[7784]: E0223 13:02:49.973730 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:02:50.836376 master-0 kubenswrapper[7784]: I0223 13:02:50.836246 7784 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-rlbcj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Feb 23 13:02:50.837332 master-0 kubenswrapper[7784]: I0223 13:02:50.836440 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" podUID="f348bffa-b2f6-4695-88a7-923625e7fb02" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused"
Feb 23 13:02:52.617287 master-0 kubenswrapper[7784]: E0223 13:02:52.617162 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:53.710573 master-0 kubenswrapper[7784]: I0223 13:02:53.710471 7784 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="54831b6236c4e39cb7ee1d061f9dcd71b81fd654a26ceb27bb0db7808c016243" exitCode=0
Feb 23 13:02:53.713143 master-0 kubenswrapper[7784]: I0223 13:02:53.713040 7784 generic.go:334] "Generic (PLEG): container finished" podID="71cb2f21-6d27-411f-9c2f-d5fa286895a7" containerID="44ecc8bd157550465c3780c8f90979b8897639b6eed19a94cadcc31f44d1bf1b" exitCode=0
Feb 23 13:02:55.729529 master-0 kubenswrapper[7784]: I0223 13:02:55.729425 7784 generic.go:334] "Generic (PLEG): container finished" podID="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" containerID="7cf32cc15b30cd0a472deb261e78baeaf04608bdbd83cf83d235fb4d4ea8600c" exitCode=0
Feb 23 13:02:57.744304 master-0 kubenswrapper[7784]: I0223 13:02:57.744197 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-zr6kv_18386753-ec74-456d-838d-98c07c169b4b/approver/0.log"
Feb 23 13:02:57.745611 master-0 kubenswrapper[7784]: I0223 13:02:57.744742 7784 generic.go:334] "Generic (PLEG): container finished" podID="18386753-ec74-456d-838d-98c07c169b4b" containerID="d01166f75613e8876ca557628e42fc7b26709f163770565d233c3c09b10f65ff" exitCode=1
Feb 23 13:02:59.394815 master-0 kubenswrapper[7784]: E0223 13:02:59.394702 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:02:59.759570 master-0 kubenswrapper[7784]: I0223 13:02:59.759499 7784 generic.go:334] "Generic (PLEG): container finished" podID="3a6b0d84-a344-43e4-b9c4-c8e0670528de" containerID="f41bbfdb7f3332d7cf43817f8495af6ada5a69e9698540f12848e6c0a2e50947" exitCode=0
Feb 23 13:02:59.762498 master-0 kubenswrapper[7784]: I0223 13:02:59.762322 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-q7q5x_0d7c1ea0-e3c1-4494-bb27-058200b93ed7/network-operator/0.log"
Feb 23 13:02:59.762498 master-0 kubenswrapper[7784]: I0223 13:02:59.762458 7784 generic.go:334] "Generic (PLEG): container finished" podID="0d7c1ea0-e3c1-4494-bb27-058200b93ed7" containerID="eb968c3314cb31b6e0492300e6336271f0112ff545f49715e98a1fe86c9c31d2" exitCode=255
Feb 23 13:02:59.974708 master-0 kubenswrapper[7784]: E0223 13:02:59.974625 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:03:00.836612 master-0 kubenswrapper[7784]: I0223 13:03:00.836197 7784 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-rlbcj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Feb 23 13:03:00.836612 master-0 kubenswrapper[7784]: I0223 13:03:00.836273 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" podUID="f348bffa-b2f6-4695-88a7-923625e7fb02" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused"
Feb 23 13:03:01.463861 master-0 kubenswrapper[7784]: E0223 13:03:01.463779 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[package-server-manager-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" podUID="e6f93af9-bdbb-4319-8ddb-e5458e8a9275"
Feb 23 13:03:06.470637 master-0 kubenswrapper[7784]: I0223 13:03:06.470564 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:03:06.472148 master-0 kubenswrapper[7784]: E0223 13:03:06.470836 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:03:06.472424 master-0 kubenswrapper[7784]: E0223 13:03:06.472395 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:05:08.472330226 +0000 UTC m=+251.207183909 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found
Feb 23 13:03:06.803606 master-0 kubenswrapper[7784]: I0223 13:03:06.803512 7784 generic.go:334] "Generic (PLEG): container finished" podID="3daf0176-92e7-4642-8643-4afbefb77235" containerID="f205f47da789bb0655eaefd3fc629901d18927b18577bd859aed40fe66e3e22f" exitCode=0
Feb 23 13:03:08.815937 master-0 kubenswrapper[7784]: I0223 13:03:08.815848 7784 generic.go:334] "Generic (PLEG): container finished" podID="4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a" containerID="3d99d0c2bd6be47ab909ce0f360a9cd7297541119cb33654550886e7ec757dd2" exitCode=0
Feb 23 13:03:09.396292 master-0 kubenswrapper[7784]: E0223 13:03:09.396172 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:03:09.396292 master-0 kubenswrapper[7784]: I0223 13:03:09.396238 7784 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 23 13:03:09.823222 master-0 kubenswrapper[7784]: I0223 13:03:09.823121 7784 generic.go:334] "Generic (PLEG): container finished" podID="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" containerID="bf50e58fb96262a2da0270150de3bc7ed1ff7e9dd4f82079fe11e7f3e00ec9c7" exitCode=0
Feb 23 13:03:09.975199 master-0 kubenswrapper[7784]: E0223 13:03:09.975082 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:03:09.975199 master-0 kubenswrapper[7784]: E0223 13:03:09.975162 7784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 23 13:03:10.836208 master-0 kubenswrapper[7784]: I0223 13:03:10.836141 7784 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-rlbcj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Feb 23 13:03:10.836720 master-0 kubenswrapper[7784]: I0223 13:03:10.836242 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" podUID="f348bffa-b2f6-4695-88a7-923625e7fb02" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused"
Feb 23 13:03:12.255064 master-0 kubenswrapper[7784]: I0223 13:03:12.254939 7784 status_manager.go:851] "Failed to get status for pod" podUID="797b4e06-e895-4ccc-a8f8-9de5d3a6663f" pod="openshift-kube-scheduler/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)"
Feb 23 13:03:13.848748 master-0 kubenswrapper[7784]: I0223 13:03:13.848604 7784 generic.go:334] "Generic (PLEG): container finished" podID="7d0a976c-1492-4989-a5ff-e386564dd6ba" containerID="c355e2c1c4f0e97e7c52c65af1c7679e829d5cd786200eccdf8b33d7cd15372a" exitCode=0
Feb 23 13:03:17.524120 master-0 kubenswrapper[7784]: E0223 13:03:17.524067 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:03:17.525178 master-0 kubenswrapper[7784]: E0223 13:03:17.525089 7784 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.011s"
Feb 23 13:03:17.525289 master-0 kubenswrapper[7784]: I0223 13:03:17.525204 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:03:17.526013 master-0 kubenswrapper[7784]: I0223 13:03:17.525963 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 23 13:03:17.526112 master-0 kubenswrapper[7784]: I0223 13:03:17.526040 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae" gracePeriod=30
Feb 23 13:03:17.538383 master-0 kubenswrapper[7784]: I0223 13:03:17.538289 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 23 13:03:17.878497 master-0 kubenswrapper[7784]: I0223 13:03:17.878277 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae" exitCode=2
Feb 23 13:03:17.880882 master-0 kubenswrapper[7784]: I0223 13:03:17.880815 7784 generic.go:334] "Generic (PLEG): container finished" podID="f348bffa-b2f6-4695-88a7-923625e7fb02" containerID="4bf0acfb1627fed2922b1ade4afb1172158564f4516d958d55b369d98f788765" exitCode=0
Feb 23 13:03:19.396774 master-0 kubenswrapper[7784]: E0223 13:03:19.396679 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Feb 23 13:03:19.671166 master-0 kubenswrapper[7784]: E0223 13:03:19.670903 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{control-plane-machine-set-operator-686847ff5f-pqjsm.1896e1c348b0d5a8 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:control-plane-machine-set-operator-686847ff5f-pqjsm,UID:e5802841-52dc-4d15-a252-0eac70e9fbbc,APIVersion:v1,ResourceVersion:8733,FieldPath:spec.containers{control-plane-machine-set-operator},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\" in 1.956s (1.956s including waiting). Image size: 470575802 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:12.271289768 +0000 UTC m=+75.006143401,LastTimestamp:2026-02-23 13:02:12.271289768 +0000 UTC m=+75.006143401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:03:22.916955 master-0 kubenswrapper[7784]: I0223 13:03:22.916809 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_4bad4fd9-074b-4a4e-8af9-50bdc4be09df/installer/0.log"
Feb 23 13:03:22.916955 master-0 kubenswrapper[7784]: I0223 13:03:22.916916 7784 generic.go:334] "Generic (PLEG): container finished" podID="4bad4fd9-074b-4a4e-8af9-50bdc4be09df" containerID="25f5f27094e3f4980d0a60ff68ca18311759b6f56fac8ff763cd8f4150a673af" exitCode=1
Feb 23 13:03:27.949119 master-0 kubenswrapper[7784]: I0223 13:03:27.948951 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_283fd2f4-771b-4592-a143-b7e3a5ed6765/installer/0.log"
Feb 23 13:03:27.949119 master-0 kubenswrapper[7784]: I0223 13:03:27.949027 7784 generic.go:334] "Generic (PLEG): container finished" podID="283fd2f4-771b-4592-a143-b7e3a5ed6765" containerID="698f0709a0bf6365bf7afb4765b93fe2fefc787772f82b5103295a5f25bae796" exitCode=1
Feb 23 13:03:29.598099 master-0 kubenswrapper[7784]: E0223 13:03:29.597903 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Feb 23 13:03:30.232528 master-0 kubenswrapper[7784]: E0223 13:03:30.232214 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:03:20Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:03:20Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:03:20Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:03:20Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a
4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\
"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:39.999385 master-0 kubenswrapper[7784]: E0223 13:03:39.999242 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 23 13:03:40.233258 master-0 kubenswrapper[7784]: E0223 13:03:40.233144 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:50.234767 master-0 kubenswrapper[7784]: E0223 13:03:50.234604 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 
13:03:50.800538 master-0 kubenswrapper[7784]: E0223 13:03:50.800398 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 23 13:03:51.545282 master-0 kubenswrapper[7784]: E0223 13:03:51.544412 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:03:51.546263 master-0 kubenswrapper[7784]: E0223 13:03:51.545571 7784 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.02s" Feb 23 13:03:51.562630 master-0 kubenswrapper[7784]: I0223 13:03:51.562546 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:03:53.674370 master-0 kubenswrapper[7784]: E0223 13:03:53.674134 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{control-plane-machine-set-operator-686847ff5f-pqjsm.1896e1c350b30c15 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:control-plane-machine-set-operator-686847ff5f-pqjsm,UID:e5802841-52dc-4d15-a252-0eac70e9fbbc,APIVersion:v1,ResourceVersion:8733,FieldPath:spec.containers{control-plane-machine-set-operator},},Reason:Created,Message:Created container: control-plane-machine-set-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:12.405652501 +0000 UTC m=+75.140506154,LastTimestamp:2026-02-23 13:02:12.405652501 +0000 UTC m=+75.140506154,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:04:00.150125 master-0 kubenswrapper[7784]: I0223 13:04:00.150018 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-dpxl4_d71885db-c29e-429a-aa1f-1c274796a69f/openshift-controller-manager-operator/1.log" Feb 23 13:04:00.151580 master-0 kubenswrapper[7784]: I0223 13:04:00.151503 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-dpxl4_d71885db-c29e-429a-aa1f-1c274796a69f/openshift-controller-manager-operator/0.log" Feb 23 13:04:00.151736 master-0 kubenswrapper[7784]: I0223 13:04:00.151669 7784 generic.go:334] "Generic (PLEG): container finished" podID="d71885db-c29e-429a-aa1f-1c274796a69f" containerID="114dcbc6fdab8f4038f5c9cf10d758f8abe2dd1b2791ef3c7f6e715028e0da39" exitCode=255 Feb 23 13:04:00.235843 master-0 kubenswrapper[7784]: E0223 13:04:00.235733 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:01.972428 master-0 kubenswrapper[7784]: I0223 13:04:01.972267 7784 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-ql2nl container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.29:8081/healthz\": dial tcp 10.128.0.29:8081: connect: connection refused" start-of-body= Feb 23 13:04:01.973277 master-0 kubenswrapper[7784]: I0223 13:04:01.972477 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" podUID="6ff7868e-f0d3-4c63-901f-fed11d623cf1" 
containerName="manager" probeResult="failure" output="Get \"http://10.128.0.29:8081/healthz\": dial tcp 10.128.0.29:8081: connect: connection refused" Feb 23 13:04:01.973277 master-0 kubenswrapper[7784]: I0223 13:04:01.972697 7784 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-ql2nl container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.29:8081/readyz\": dial tcp 10.128.0.29:8081: connect: connection refused" start-of-body= Feb 23 13:04:01.973277 master-0 kubenswrapper[7784]: I0223 13:04:01.972752 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" podUID="6ff7868e-f0d3-4c63-901f-fed11d623cf1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.29:8081/readyz\": dial tcp 10.128.0.29:8081: connect: connection refused" Feb 23 13:04:02.166274 master-0 kubenswrapper[7784]: I0223 13:04:02.166146 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-ql2nl_6ff7868e-f0d3-4c63-901f-fed11d623cf1/manager/0.log" Feb 23 13:04:02.166274 master-0 kubenswrapper[7784]: I0223 13:04:02.166193 7784 generic.go:334] "Generic (PLEG): container finished" podID="6ff7868e-f0d3-4c63-901f-fed11d623cf1" containerID="4cc3ecc5feacb9f931479e4483246f1ec0ef16491cc14ad9cd0c596a2b97f27d" exitCode=1 Feb 23 13:04:02.402094 master-0 kubenswrapper[7784]: E0223 13:04:02.401990 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 23 13:04:02.980123 master-0 kubenswrapper[7784]: I0223 13:04:02.979999 7784 patch_prober.go:28] interesting 
pod/catalogd-controller-manager-84b8d9d697-cqmh7 container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.30:8081/healthz\": dial tcp 10.128.0.30:8081: connect: connection refused" start-of-body= Feb 23 13:04:02.980123 master-0 kubenswrapper[7784]: I0223 13:04:02.980040 7784 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-cqmh7 container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.30:8081/readyz\": dial tcp 10.128.0.30:8081: connect: connection refused" start-of-body= Feb 23 13:04:02.981122 master-0 kubenswrapper[7784]: I0223 13:04:02.980124 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" podUID="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.30:8081/readyz\": dial tcp 10.128.0.30:8081: connect: connection refused" Feb 23 13:04:02.981122 master-0 kubenswrapper[7784]: I0223 13:04:02.980122 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" podUID="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.30:8081/healthz\": dial tcp 10.128.0.30:8081: connect: connection refused" Feb 23 13:04:03.175162 master-0 kubenswrapper[7784]: I0223 13:04:03.174976 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-cqmh7_fce9f67d-0b27-41e3-ba4c-ed9cca25703e/manager/0.log" Feb 23 13:04:03.175779 master-0 kubenswrapper[7784]: I0223 13:04:03.175708 7784 generic.go:334] "Generic (PLEG): container finished" podID="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" containerID="d249745523695601f887a8698e1ad99347f7c0f390b57c191ff627979ced32b8" exitCode=1 Feb 23 13:04:10.236896 master-0 kubenswrapper[7784]: E0223 13:04:10.236759 
7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:10.236896 master-0 kubenswrapper[7784]: E0223 13:04:10.236807 7784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:04:11.973501 master-0 kubenswrapper[7784]: I0223 13:04:11.973325 7784 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-ql2nl container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.29:8081/readyz\": dial tcp 10.128.0.29:8081: connect: connection refused" start-of-body= Feb 23 13:04:11.973501 master-0 kubenswrapper[7784]: I0223 13:04:11.973502 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" podUID="6ff7868e-f0d3-4c63-901f-fed11d623cf1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.29:8081/readyz\": dial tcp 10.128.0.29:8081: connect: connection refused" Feb 23 13:04:12.226203 master-0 kubenswrapper[7784]: I0223 13:04:12.225965 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/0.log" Feb 23 13:04:12.226203 master-0 kubenswrapper[7784]: I0223 13:04:12.226049 7784 generic.go:334] "Generic (PLEG): container finished" podID="878aa813-a8b9-4a6f-8086-778df276d0d7" containerID="c8289f028a5b9b2ff9bd84ee035e05cf3ab1f61b8019dd41bc447fe370637ef6" exitCode=1 Feb 23 13:04:12.260402 master-0 kubenswrapper[7784]: I0223 13:04:12.260209 7784 status_manager.go:851] "Failed to get status for pod" podUID="56c3cb71c9851003c8de7e7c5db4b87e" 
pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-scheduler-master-0)" Feb 23 13:04:12.979859 master-0 kubenswrapper[7784]: I0223 13:04:12.979718 7784 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-cqmh7 container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.30:8081/readyz\": dial tcp 10.128.0.30:8081: connect: connection refused" start-of-body= Feb 23 13:04:12.980932 master-0 kubenswrapper[7784]: I0223 13:04:12.979878 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" podUID="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.30:8081/readyz\": dial tcp 10.128.0.30:8081: connect: connection refused" Feb 23 13:04:14.240920 master-0 kubenswrapper[7784]: I0223 13:04:14.240746 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/0.log" Feb 23 13:04:14.240920 master-0 kubenswrapper[7784]: I0223 13:04:14.240820 7784 generic.go:334] "Generic (PLEG): container finished" podID="5793184d-de96-49ad-a060-0fa0cf278a9c" containerID="8dbfb3a49d15de4419fc29dce0193ff2a8f2f1238053d11c98101bb8a51adb15" exitCode=1 Feb 23 13:04:15.604170 master-0 kubenswrapper[7784]: E0223 13:04:15.603900 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 23 13:04:16.254771 master-0 kubenswrapper[7784]: I0223 13:04:16.254606 7784 generic.go:334] "Generic (PLEG): 
container finished" podID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerID="e4ed838542af022eb9712b2516ce0b1c3c0ca74d3f39f916a6f32d58ec0e24c3" exitCode=0 Feb 23 13:04:19.277826 master-0 kubenswrapper[7784]: I0223 13:04:19.277724 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="708c16b81b0264d53e4f4fa259e09481e563a2bc5a1dbd63e658d7489a2758a3" exitCode=1 Feb 23 13:04:20.564989 master-0 kubenswrapper[7784]: I0223 13:04:20.564879 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:04:20.564989 master-0 kubenswrapper[7784]: I0223 13:04:20.564972 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:04:20.565886 master-0 kubenswrapper[7784]: I0223 13:04:20.565172 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:04:20.565886 master-0 kubenswrapper[7784]: I0223 13:04:20.565260 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:04:21.972934 
master-0 kubenswrapper[7784]: I0223 13:04:21.972870 7784 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-ql2nl container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.29:8081/healthz\": dial tcp 10.128.0.29:8081: connect: connection refused" start-of-body= Feb 23 13:04:21.974035 master-0 kubenswrapper[7784]: I0223 13:04:21.973233 7784 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-ql2nl container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.29:8081/readyz\": dial tcp 10.128.0.29:8081: connect: connection refused" start-of-body= Feb 23 13:04:21.974035 master-0 kubenswrapper[7784]: I0223 13:04:21.973830 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" podUID="6ff7868e-f0d3-4c63-901f-fed11d623cf1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.29:8081/readyz\": dial tcp 10.128.0.29:8081: connect: connection refused" Feb 23 13:04:21.974035 master-0 kubenswrapper[7784]: I0223 13:04:21.973725 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" podUID="6ff7868e-f0d3-4c63-901f-fed11d623cf1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.29:8081/healthz\": dial tcp 10.128.0.29:8081: connect: connection refused" Feb 23 13:04:22.980512 master-0 kubenswrapper[7784]: I0223 13:04:22.980331 7784 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-cqmh7 container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.30:8081/readyz\": dial tcp 10.128.0.30:8081: connect: connection refused" start-of-body= Feb 23 13:04:22.980512 master-0 kubenswrapper[7784]: I0223 
13:04:22.980462 7784 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-cqmh7 container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.30:8081/healthz\": dial tcp 10.128.0.30:8081: connect: connection refused" start-of-body= Feb 23 13:04:22.981562 master-0 kubenswrapper[7784]: I0223 13:04:22.980531 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" podUID="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.30:8081/readyz\": dial tcp 10.128.0.30:8081: connect: connection refused" Feb 23 13:04:22.981562 master-0 kubenswrapper[7784]: I0223 13:04:22.980590 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" podUID="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.30:8081/healthz\": dial tcp 10.128.0.30:8081: connect: connection refused" Feb 23 13:04:25.566298 master-0 kubenswrapper[7784]: E0223 13:04:25.566127 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:04:25.567686 master-0 kubenswrapper[7784]: E0223 13:04:25.566684 7784 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s" Feb 23 13:04:25.567686 master-0 kubenswrapper[7784]: I0223 13:04:25.566731 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:04:25.567686 master-0 kubenswrapper[7784]: I0223 13:04:25.566845 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:04:25.567686 master-0 kubenswrapper[7784]: I0223 13:04:25.566873 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"54831b6236c4e39cb7ee1d061f9dcd71b81fd654a26ceb27bb0db7808c016243"} Feb 23 13:04:25.567686 master-0 kubenswrapper[7784]: I0223 13:04:25.566921 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" event={"ID":"71cb2f21-6d27-411f-9c2f-d5fa286895a7","Type":"ContainerDied","Data":"44ecc8bd157550465c3780c8f90979b8897639b6eed19a94cadcc31f44d1bf1b"} Feb 23 13:04:25.567686 master-0 kubenswrapper[7784]: I0223 13:04:25.566947 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" event={"ID":"29126ab2-a689-4b0e-a1f4-4faed19b0fbc","Type":"ContainerDied","Data":"7cf32cc15b30cd0a472deb261e78baeaf04608bdbd83cf83d235fb4d4ea8600c"} Feb 23 13:04:25.567686 master-0 kubenswrapper[7784]: I0223 13:04:25.567015 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:04:25.568226 master-0 kubenswrapper[7784]: I0223 13:04:25.568157 7784 scope.go:117] "RemoveContainer" containerID="4bf0acfb1627fed2922b1ade4afb1172158564f4516d958d55b369d98f788765" Feb 23 13:04:25.568630 master-0 kubenswrapper[7784]: I0223 13:04:25.568478 7784 scope.go:117] "RemoveContainer" containerID="44ecc8bd157550465c3780c8f90979b8897639b6eed19a94cadcc31f44d1bf1b" Feb 23 13:04:25.568731 master-0 kubenswrapper[7784]: I0223 13:04:25.568683 7784 scope.go:117] "RemoveContainer" containerID="7cf32cc15b30cd0a472deb261e78baeaf04608bdbd83cf83d235fb4d4ea8600c" Feb 23 13:04:25.568884 master-0 kubenswrapper[7784]: I0223 
13:04:25.568787 7784 scope.go:117] "RemoveContainer" containerID="4cc3ecc5feacb9f931479e4483246f1ec0ef16491cc14ad9cd0c596a2b97f27d" Feb 23 13:04:25.569066 master-0 kubenswrapper[7784]: I0223 13:04:25.569009 7784 scope.go:117] "RemoveContainer" containerID="d249745523695601f887a8698e1ad99347f7c0f390b57c191ff627979ced32b8" Feb 23 13:04:25.589935 master-0 kubenswrapper[7784]: I0223 13:04:25.589680 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:04:26.324411 master-0 kubenswrapper[7784]: I0223 13:04:26.324289 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-ql2nl_6ff7868e-f0d3-4c63-901f-fed11d623cf1/manager/0.log" Feb 23 13:04:26.327998 master-0 kubenswrapper[7784]: I0223 13:04:26.327976 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-cqmh7_fce9f67d-0b27-41e3-ba4c-ed9cca25703e/manager/0.log" Feb 23 13:04:27.678315 master-0 kubenswrapper[7784]: E0223 13:04:27.678080 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{control-plane-machine-set-operator-686847ff5f-pqjsm.1896e1c3517fcd73 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:control-plane-machine-set-operator-686847ff5f-pqjsm,UID:e5802841-52dc-4d15-a252-0eac70e9fbbc,APIVersion:v1,ResourceVersion:8733,FieldPath:spec.containers{control-plane-machine-set-operator},},Reason:Started,Message:Started container control-plane-machine-set-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:12.419071347 +0000 UTC m=+75.153924980,LastTimestamp:2026-02-23 13:02:12.419071347 +0000 UTC 
m=+75.153924980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:04:30.547291 master-0 kubenswrapper[7784]: E0223 13:04:30.545148 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:04:20Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:04:20Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:04:20Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:04:20Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90d
bb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"siz
eBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded" Feb 23 13:04:30.566309 master-0 kubenswrapper[7784]: I0223 13:04:30.566212 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:04:30.566309 master-0 kubenswrapper[7784]: I0223 13:04:30.566200 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:04:30.566622 master-0 kubenswrapper[7784]: 
I0223 13:04:30.566323 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:04:30.566622 master-0 kubenswrapper[7784]: I0223 13:04:30.566335 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:04:32.004756 master-0 kubenswrapper[7784]: E0223 13:04:32.004621 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:04:36.402099 master-0 kubenswrapper[7784]: I0223 13:04:36.401961 7784 generic.go:334] "Generic (PLEG): container finished" podID="d7c80f4d-6b28-44f4-beef-01e705260452" containerID="2187448d7b4208e3e1befa756c107826cc44935cd19819e30170d5f0d754f882" exitCode=0 Feb 23 13:04:38.579838 master-0 kubenswrapper[7784]: E0223 13:04:38.579712 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 23 13:04:40.548323 master-0 kubenswrapper[7784]: E0223 13:04:40.548236 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 23 13:04:40.565330 master-0 kubenswrapper[7784]: I0223 13:04:40.565211 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:04:40.565768 master-0 kubenswrapper[7784]: I0223 13:04:40.565373 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:04:40.565768 master-0 kubenswrapper[7784]: I0223 13:04:40.565405 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:04:40.565768 master-0 kubenswrapper[7784]: I0223 13:04:40.565533 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:04:49.006743 master-0 kubenswrapper[7784]: E0223 13:04:49.006590 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:04:49.518030 master-0 kubenswrapper[7784]: I0223 
13:04:49.517929 7784 generic.go:334] "Generic (PLEG): container finished" podID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerID="12929995bc4c469f6a1c977c1403bda9305b2b652d95308c622e2a38faae5fab" exitCode=0 Feb 23 13:04:50.550590 master-0 kubenswrapper[7784]: E0223 13:04:50.550481 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:50.566143 master-0 kubenswrapper[7784]: I0223 13:04:50.566046 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:04:50.566471 master-0 kubenswrapper[7784]: I0223 13:04:50.566162 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:04:51.020823 master-0 kubenswrapper[7784]: I0223 13:04:51.020686 7784 patch_prober.go:28] interesting pod/controller-manager-69f44bb786-4zj6n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.46:8443/healthz\": dial tcp 10.128.0.46:8443: connect: connection refused" start-of-body= Feb 23 13:04:51.020823 master-0 kubenswrapper[7784]: I0223 13:04:51.020764 7784 patch_prober.go:28] interesting pod/controller-manager-69f44bb786-4zj6n container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get 
\"https://10.128.0.46:8443/healthz\": dial tcp 10.128.0.46:8443: connect: connection refused" start-of-body= Feb 23 13:04:51.021145 master-0 kubenswrapper[7784]: I0223 13:04:51.020837 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.46:8443/healthz\": dial tcp 10.128.0.46:8443: connect: connection refused" Feb 23 13:04:51.021145 master-0 kubenswrapper[7784]: I0223 13:04:51.020897 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.46:8443/healthz\": dial tcp 10.128.0.46:8443: connect: connection refused" Feb 23 13:04:59.594156 master-0 kubenswrapper[7784]: E0223 13:04:59.594024 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:04:59.595293 master-0 kubenswrapper[7784]: E0223 13:04:59.594404 7784 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.027s" Feb 23 13:04:59.595293 master-0 kubenswrapper[7784]: I0223 13:04:59.594461 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:04:59.595293 master-0 kubenswrapper[7784]: I0223 13:04:59.594581 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:04:59.595293 master-0 kubenswrapper[7784]: I0223 13:04:59.594612 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-zr6kv" event={"ID":"18386753-ec74-456d-838d-98c07c169b4b","Type":"ContainerDied","Data":"d01166f75613e8876ca557628e42fc7b26709f163770565d233c3c09b10f65ff"} Feb 23 13:04:59.595293 master-0 kubenswrapper[7784]: I0223 13:04:59.594669 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:04:59.595835 master-0 kubenswrapper[7784]: I0223 13:04:59.595704 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:04:59.595941 master-0 kubenswrapper[7784]: I0223 13:04:59.595854 7784 scope.go:117] "RemoveContainer" containerID="e4ed838542af022eb9712b2516ce0b1c3c0ca74d3f39f916a6f32d58ec0e24c3" Feb 23 13:04:59.596415 master-0 kubenswrapper[7784]: I0223 13:04:59.596366 7784 scope.go:117] "RemoveContainer" containerID="c8289f028a5b9b2ff9bd84ee035e05cf3ab1f61b8019dd41bc447fe370637ef6" Feb 23 13:04:59.596925 master-0 kubenswrapper[7784]: I0223 13:04:59.596844 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:04:59.597075 master-0 kubenswrapper[7784]: I0223 13:04:59.597033 7784 scope.go:117] "RemoveContainer" containerID="eb968c3314cb31b6e0492300e6336271f0112ff545f49715e98a1fe86c9c31d2" Feb 23 13:04:59.597932 master-0 kubenswrapper[7784]: I0223 13:04:59.597238 7784 scope.go:117] "RemoveContainer" containerID="8dbfb3a49d15de4419fc29dce0193ff2a8f2f1238053d11c98101bb8a51adb15" Feb 23 13:04:59.597932 master-0 kubenswrapper[7784]: I0223 13:04:59.597391 7784 scope.go:117] "RemoveContainer" containerID="3d99d0c2bd6be47ab909ce0f360a9cd7297541119cb33654550886e7ec757dd2" Feb 23 13:04:59.598293 master-0 kubenswrapper[7784]: I0223 13:04:59.598225 7784 scope.go:117] "RemoveContainer" 
containerID="bf50e58fb96262a2da0270150de3bc7ed1ff7e9dd4f82079fe11e7f3e00ec9c7" Feb 23 13:04:59.599837 master-0 kubenswrapper[7784]: I0223 13:04:59.599766 7784 scope.go:117] "RemoveContainer" containerID="d01166f75613e8876ca557628e42fc7b26709f163770565d233c3c09b10f65ff" Feb 23 13:04:59.600737 master-0 kubenswrapper[7784]: I0223 13:04:59.600698 7784 scope.go:117] "RemoveContainer" containerID="708c16b81b0264d53e4f4fa259e09481e563a2bc5a1dbd63e658d7489a2758a3" Feb 23 13:04:59.601924 master-0 kubenswrapper[7784]: I0223 13:04:59.601906 7784 scope.go:117] "RemoveContainer" containerID="c355e2c1c4f0e97e7c52c65af1c7679e829d5cd786200eccdf8b33d7cd15372a" Feb 23 13:04:59.602275 master-0 kubenswrapper[7784]: I0223 13:04:59.602169 7784 scope.go:117] "RemoveContainer" containerID="114dcbc6fdab8f4038f5c9cf10d758f8abe2dd1b2791ef3c7f6e715028e0da39" Feb 23 13:04:59.602275 master-0 kubenswrapper[7784]: I0223 13:04:59.602210 7784 scope.go:117] "RemoveContainer" containerID="f41bbfdb7f3332d7cf43817f8495af6ada5a69e9698540f12848e6c0a2e50947" Feb 23 13:04:59.602571 master-0 kubenswrapper[7784]: I0223 13:04:59.602517 7784 scope.go:117] "RemoveContainer" containerID="2187448d7b4208e3e1befa756c107826cc44935cd19819e30170d5f0d754f882" Feb 23 13:04:59.603434 master-0 kubenswrapper[7784]: I0223 13:04:59.603371 7784 scope.go:117] "RemoveContainer" containerID="12929995bc4c469f6a1c977c1403bda9305b2b652d95308c622e2a38faae5fab" Feb 23 13:04:59.604516 master-0 kubenswrapper[7784]: I0223 13:04:59.604245 7784 scope.go:117] "RemoveContainer" containerID="f205f47da789bb0655eaefd3fc629901d18927b18577bd859aed40fe66e3e22f" Feb 23 13:04:59.628113 master-0 kubenswrapper[7784]: I0223 13:04:59.628042 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:05:00.013793 master-0 kubenswrapper[7784]: I0223 13:05:00.008007 7784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_4bad4fd9-074b-4a4e-8af9-50bdc4be09df/installer/0.log" Feb 23 13:05:00.013793 master-0 kubenswrapper[7784]: I0223 13:05:00.008103 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 23 13:05:00.030966 master-0 kubenswrapper[7784]: I0223 13:05:00.030922 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_283fd2f4-771b-4592-a143-b7e3a5ed6765/installer/0.log" Feb 23 13:05:00.031083 master-0 kubenswrapper[7784]: I0223 13:05:00.031005 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.046460 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-kubelet-dir\") pod \"283fd2f4-771b-4592-a143-b7e3a5ed6765\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.046525 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-var-lock\") pod \"283fd2f4-771b-4592-a143-b7e3a5ed6765\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.046579 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kubelet-dir\") pod \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.046622 7784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kube-api-access\") pod \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.046742 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/283fd2f4-771b-4592-a143-b7e3a5ed6765-kube-api-access\") pod \"283fd2f4-771b-4592-a143-b7e3a5ed6765\" (UID: \"283fd2f4-771b-4592-a143-b7e3a5ed6765\") " Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.046765 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-var-lock\") pod \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\" (UID: \"4bad4fd9-074b-4a4e-8af9-50bdc4be09df\") " Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.047138 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-var-lock" (OuterVolumeSpecName: "var-lock") pod "4bad4fd9-074b-4a4e-8af9-50bdc4be09df" (UID: "4bad4fd9-074b-4a4e-8af9-50bdc4be09df"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.047189 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "283fd2f4-771b-4592-a143-b7e3a5ed6765" (UID: "283fd2f4-771b-4592-a143-b7e3a5ed6765"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.047210 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-var-lock" (OuterVolumeSpecName: "var-lock") pod "283fd2f4-771b-4592-a143-b7e3a5ed6765" (UID: "283fd2f4-771b-4592-a143-b7e3a5ed6765"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:05:00.048976 master-0 kubenswrapper[7784]: I0223 13:05:00.047232 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4bad4fd9-074b-4a4e-8af9-50bdc4be09df" (UID: "4bad4fd9-074b-4a4e-8af9-50bdc4be09df"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:05:00.052414 master-0 kubenswrapper[7784]: I0223 13:05:00.050439 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4bad4fd9-074b-4a4e-8af9-50bdc4be09df" (UID: "4bad4fd9-074b-4a4e-8af9-50bdc4be09df"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:05:00.064606 master-0 kubenswrapper[7784]: I0223 13:05:00.052830 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283fd2f4-771b-4592-a143-b7e3a5ed6765-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "283fd2f4-771b-4592-a143-b7e3a5ed6765" (UID: "283fd2f4-771b-4592-a143-b7e3a5ed6765"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:05:00.148615 master-0 kubenswrapper[7784]: I0223 13:05:00.147911 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:00.148615 master-0 kubenswrapper[7784]: I0223 13:05:00.147943 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:00.148615 master-0 kubenswrapper[7784]: I0223 13:05:00.147953 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/283fd2f4-771b-4592-a143-b7e3a5ed6765-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:00.148615 master-0 kubenswrapper[7784]: I0223 13:05:00.147962 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4bad4fd9-074b-4a4e-8af9-50bdc4be09df-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:00.148615 master-0 kubenswrapper[7784]: I0223 13:05:00.147972 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:00.148615 master-0 kubenswrapper[7784]: I0223 13:05:00.147981 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/283fd2f4-771b-4592-a143-b7e3a5ed6765-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:00.550991 master-0 kubenswrapper[7784]: E0223 13:05:00.550907 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 13:05:00.643082 master-0 kubenswrapper[7784]: I0223 13:05:00.642971 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_283fd2f4-771b-4592-a143-b7e3a5ed6765/installer/0.log" Feb 23 13:05:00.643747 master-0 kubenswrapper[7784]: I0223 13:05:00.643702 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:05:00.646099 master-0 kubenswrapper[7784]: I0223 13:05:00.646047 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/0.log" Feb 23 13:05:00.651138 master-0 kubenswrapper[7784]: I0223 13:05:00.651106 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/0.log" Feb 23 13:05:00.655815 master-0 kubenswrapper[7784]: I0223 13:05:00.655740 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-zr6kv_18386753-ec74-456d-838d-98c07c169b4b/approver/0.log" Feb 23 13:05:00.658754 master-0 kubenswrapper[7784]: I0223 13:05:00.658494 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-dpxl4_d71885db-c29e-429a-aa1f-1c274796a69f/openshift-controller-manager-operator/1.log" Feb 23 13:05:00.659442 master-0 kubenswrapper[7784]: I0223 13:05:00.659208 7784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-dpxl4_d71885db-c29e-429a-aa1f-1c274796a69f/openshift-controller-manager-operator/0.log" Feb 23 13:05:00.672144 master-0 kubenswrapper[7784]: I0223 13:05:00.672108 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-q7q5x_0d7c1ea0-e3c1-4494-bb27-058200b93ed7/network-operator/0.log" Feb 23 13:05:00.682902 master-0 kubenswrapper[7784]: I0223 13:05:00.682877 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_4bad4fd9-074b-4a4e-8af9-50bdc4be09df/installer/0.log" Feb 23 13:05:00.683124 master-0 kubenswrapper[7784]: I0223 13:05:00.683108 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 23 13:05:01.681934 master-0 kubenswrapper[7784]: E0223 13:05:01.681626 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1c61ffeef5a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:24.473444186 +0000 UTC m=+87.208297829,LastTimestamp:2026-02-23 13:02:24.473444186 +0000 UTC m=+87.208297829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:05:05.092576 master-0 kubenswrapper[7784]: E0223 13:05:05.092475 7784 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.496s" Feb 23 13:05:05.092576 master-0 kubenswrapper[7784]: I0223 13:05:05.092494 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" podStartSLOduration=174.135626167 podStartE2EDuration="2m56.09244881s" podCreationTimestamp="2026-02-23 13:02:09 +0000 UTC" firstStartedPulling="2026-02-23 13:02:10.314453784 +0000 UTC m=+73.049307467" lastFinishedPulling="2026-02-23 13:02:12.271276467 +0000 UTC m=+75.006130110" observedRunningTime="2026-02-23 13:05:05.091740853 +0000 UTC m=+247.826594526" watchObservedRunningTime="2026-02-23 13:05:05.09244881 +0000 UTC m=+247.827302483" Feb 23 13:05:05.093497 master-0 kubenswrapper[7784]: I0223 13:05:05.092605 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:05:05.102568 master-0 kubenswrapper[7784]: I0223 13:05:05.102519 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:05:05.105473 master-0 kubenswrapper[7784]: I0223 13:05:05.105441 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:05:05.105563 master-0 kubenswrapper[7784]: I0223 13:05:05.105485 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:05.105563 master-0 kubenswrapper[7784]: I0223 13:05:05.105500 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 
23 13:05:05.105563 master-0 kubenswrapper[7784]: I0223 13:05:05.105517 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:05:05.105563 master-0 kubenswrapper[7784]: I0223 13:05:05.105535 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 23 13:05:05.105563 master-0 kubenswrapper[7784]: I0223 13:05:05.105551 7784 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="032433c1-3524-4e61-a339-acae46e14f54" Feb 23 13:05:05.105749 master-0 kubenswrapper[7784]: I0223 13:05:05.105637 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:05:05.105749 master-0 kubenswrapper[7784]: I0223 13:05:05.105656 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:05:05.105749 master-0 kubenswrapper[7784]: I0223 13:05:05.105672 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" event={"ID":"3a6b0d84-a344-43e4-b9c4-c8e0670528de","Type":"ContainerDied","Data":"f41bbfdb7f3332d7cf43817f8495af6ada5a69e9698540f12848e6c0a2e50947"} Feb 23 13:05:05.105749 master-0 kubenswrapper[7784]: I0223 13:05:05.105698 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" event={"ID":"0d7c1ea0-e3c1-4494-bb27-058200b93ed7","Type":"ContainerDied","Data":"eb968c3314cb31b6e0492300e6336271f0112ff545f49715e98a1fe86c9c31d2"} Feb 23 13:05:05.105749 master-0 kubenswrapper[7784]: I0223 13:05:05.105731 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 23 13:05:05.105749 master-0 
kubenswrapper[7784]: I0223 13:05:05.105744 7784 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="032433c1-3524-4e61-a339-acae46e14f54" Feb 23 13:05:05.105970 master-0 kubenswrapper[7784]: I0223 13:05:05.105878 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:05:05.106014 master-0 kubenswrapper[7784]: I0223 13:05:05.105953 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" event={"ID":"3daf0176-92e7-4642-8643-4afbefb77235","Type":"ContainerDied","Data":"f205f47da789bb0655eaefd3fc629901d18927b18577bd859aed40fe66e3e22f"} Feb 23 13:05:05.106062 master-0 kubenswrapper[7784]: I0223 13:05:05.106025 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" event={"ID":"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a","Type":"ContainerDied","Data":"3d99d0c2bd6be47ab909ce0f360a9cd7297541119cb33654550886e7ec757dd2"} Feb 23 13:05:05.106501 master-0 kubenswrapper[7784]: I0223 13:05:05.106444 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:05:05.106575 master-0 kubenswrapper[7784]: I0223 13:05:05.106552 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:05:05.106623 master-0 kubenswrapper[7784]: I0223 13:05:05.106576 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" event={"ID":"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9","Type":"ContainerDied","Data":"bf50e58fb96262a2da0270150de3bc7ed1ff7e9dd4f82079fe11e7f3e00ec9c7"} Feb 23 
13:05:05.106623 master-0 kubenswrapper[7784]: I0223 13:05:05.106615 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" event={"ID":"7d0a976c-1492-4989-a5ff-e386564dd6ba","Type":"ContainerDied","Data":"c355e2c1c4f0e97e7c52c65af1c7679e829d5cd786200eccdf8b33d7cd15372a"} Feb 23 13:05:05.106784 master-0 kubenswrapper[7784]: I0223 13:05:05.106747 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:05:05.106836 master-0 kubenswrapper[7784]: I0223 13:05:05.106783 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae"} Feb 23 13:05:05.106895 master-0 kubenswrapper[7784]: I0223 13:05:05.106842 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" event={"ID":"f348bffa-b2f6-4695-88a7-923625e7fb02","Type":"ContainerDied","Data":"4bf0acfb1627fed2922b1ade4afb1172158564f4516d958d55b369d98f788765"} Feb 23 13:05:05.106947 master-0 kubenswrapper[7784]: I0223 13:05:05.106885 7784 scope.go:117] "RemoveContainer" containerID="745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae" Feb 23 13:05:05.106947 master-0 kubenswrapper[7784]: I0223 13:05:05.106894 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"708c16b81b0264d53e4f4fa259e09481e563a2bc5a1dbd63e658d7489a2758a3"} Feb 23 13:05:05.106947 master-0 kubenswrapper[7784]: I0223 13:05:05.106921 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" 
event={"ID":"4bad4fd9-074b-4a4e-8af9-50bdc4be09df","Type":"ContainerDied","Data":"25f5f27094e3f4980d0a60ff68ca18311759b6f56fac8ff763cd8f4150a673af"} Feb 23 13:05:05.107103 master-0 kubenswrapper[7784]: I0223 13:05:05.107016 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"283fd2f4-771b-4592-a143-b7e3a5ed6765","Type":"ContainerDied","Data":"698f0709a0bf6365bf7afb4765b93fe2fefc787772f82b5103295a5f25bae796"} Feb 23 13:05:05.107158 master-0 kubenswrapper[7784]: I0223 13:05:05.107129 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" event={"ID":"d71885db-c29e-429a-aa1f-1c274796a69f","Type":"ContainerDied","Data":"114dcbc6fdab8f4038f5c9cf10d758f8abe2dd1b2791ef3c7f6e715028e0da39"} Feb 23 13:05:05.107217 master-0 kubenswrapper[7784]: I0223 13:05:05.107179 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" event={"ID":"6ff7868e-f0d3-4c63-901f-fed11d623cf1","Type":"ContainerDied","Data":"4cc3ecc5feacb9f931479e4483246f1ec0ef16491cc14ad9cd0c596a2b97f27d"} Feb 23 13:05:05.107270 master-0 kubenswrapper[7784]: I0223 13:05:05.107234 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" event={"ID":"fce9f67d-0b27-41e3-ba4c-ed9cca25703e","Type":"ContainerDied","Data":"d249745523695601f887a8698e1ad99347f7c0f390b57c191ff627979ced32b8"} Feb 23 13:05:05.107316 master-0 kubenswrapper[7784]: I0223 13:05:05.107269 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerDied","Data":"c8289f028a5b9b2ff9bd84ee035e05cf3ab1f61b8019dd41bc447fe370637ef6"} Feb 23 13:05:05.107384 master-0 kubenswrapper[7784]: I0223 13:05:05.107306 
7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerDied","Data":"8dbfb3a49d15de4419fc29dce0193ff2a8f2f1238053d11c98101bb8a51adb15"} Feb 23 13:05:05.107429 master-0 kubenswrapper[7784]: I0223 13:05:05.107379 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" event={"ID":"35e97ed9-695d-483e-8878-4f231c79f1d2","Type":"ContainerDied","Data":"e4ed838542af022eb9712b2516ce0b1c3c0ca74d3f39f916a6f32d58ec0e24c3"} Feb 23 13:05:05.107472 master-0 kubenswrapper[7784]: I0223 13:05:05.107423 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"708c16b81b0264d53e4f4fa259e09481e563a2bc5a1dbd63e658d7489a2758a3"} Feb 23 13:05:05.107510 master-0 kubenswrapper[7784]: I0223 13:05:05.107465 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" event={"ID":"f348bffa-b2f6-4695-88a7-923625e7fb02","Type":"ContainerStarted","Data":"41b6cf880f0434106198ae6902d613f03152e0b90a833b624b5b5abd53cc862c"} Feb 23 13:05:05.107510 master-0 kubenswrapper[7784]: I0223 13:05:05.107499 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" event={"ID":"6ff7868e-f0d3-4c63-901f-fed11d623cf1","Type":"ContainerStarted","Data":"fbe5741e9b406c3ecd9652422470a5294e436c406f9d144c524e0c3d4dbd2cf3"} Feb 23 13:05:05.107589 master-0 kubenswrapper[7784]: I0223 13:05:05.107530 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" 
event={"ID":"fce9f67d-0b27-41e3-ba4c-ed9cca25703e","Type":"ContainerStarted","Data":"f2fb7d31437eda1c8fb4c2d83032aa75e8a15bdb873d7434ea9f2c386578cf0b"} Feb 23 13:05:05.107589 master-0 kubenswrapper[7784]: I0223 13:05:05.107560 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" event={"ID":"71cb2f21-6d27-411f-9c2f-d5fa286895a7","Type":"ContainerStarted","Data":"eb70045488edd4d305ada0d7cb610f250370c665d6439dd4b54e7f904deaec6e"} Feb 23 13:05:05.107669 master-0 kubenswrapper[7784]: I0223 13:05:05.107593 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" event={"ID":"29126ab2-a689-4b0e-a1f4-4faed19b0fbc","Type":"ContainerStarted","Data":"c22b50714f93c20a20973f74461ab606cb934d36952f3c9f908035a7b7fde3f0"} Feb 23 13:05:05.107669 master-0 kubenswrapper[7784]: I0223 13:05:05.107630 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" event={"ID":"d7c80f4d-6b28-44f4-beef-01e705260452","Type":"ContainerDied","Data":"2187448d7b4208e3e1befa756c107826cc44935cd19819e30170d5f0d754f882"} Feb 23 13:05:05.107748 master-0 kubenswrapper[7784]: I0223 13:05:05.107670 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"4589ad8293a096ef0dc1448f5164370fffb1afeaee6d4e435bb6f7962c78df3c"} Feb 23 13:05:05.107748 master-0 kubenswrapper[7784]: I0223 13:05:05.107707 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"732718685ab155d2debc878135f1ee8c34bcbeb0432ce1b0e4f48e042448bb6f"} Feb 23 13:05:05.107748 master-0 kubenswrapper[7784]: I0223 13:05:05.107736 7784 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"5b35e479450d4a6b40bd607a639c1e13f90b0357d49d26800e6c4e4d871bdc8e"} Feb 23 13:05:05.107892 master-0 kubenswrapper[7784]: I0223 13:05:05.107765 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"871cf7c1949a225e4e891402ce66b79bc70495b9c671e838bb0d1b8bd80d9387"} Feb 23 13:05:05.107892 master-0 kubenswrapper[7784]: I0223 13:05:05.107834 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"b7d0d3e2816f38acbe22b92b7481a331092bbc9bb66b8ca7d3c6ed43c771956e"} Feb 23 13:05:05.107892 master-0 kubenswrapper[7784]: I0223 13:05:05.107863 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" event={"ID":"d7c61886-6cc7-44aa-b56a-81cdcc670993","Type":"ContainerDied","Data":"12929995bc4c469f6a1c977c1403bda9305b2b652d95308c622e2a38faae5fab"} Feb 23 13:05:05.108006 master-0 kubenswrapper[7784]: I0223 13:05:05.107893 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"283fd2f4-771b-4592-a143-b7e3a5ed6765","Type":"ContainerDied","Data":"cbee83a28e4e85b2d4891dc24855eb2cc6165c6448a3273aa5f8a3ec8e2cf444"} Feb 23 13:05:05.108006 master-0 kubenswrapper[7784]: I0223 13:05:05.107926 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbee83a28e4e85b2d4891dc24855eb2cc6165c6448a3273aa5f8a3ec8e2cf444" Feb 23 13:05:05.108006 master-0 kubenswrapper[7784]: I0223 13:05:05.107954 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" 
event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerStarted","Data":"d8c76aaec2e18c0f4ce6428d119d5e5d091d7dfe1971812eecc67daa115b3a23"} Feb 23 13:05:05.108006 master-0 kubenswrapper[7784]: I0223 13:05:05.107994 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" event={"ID":"d7c80f4d-6b28-44f4-beef-01e705260452","Type":"ContainerStarted","Data":"a12c9e7dba4505df30d1171e23f416e511ae32af8b1117ea50805030fe947775"} Feb 23 13:05:05.108158 master-0 kubenswrapper[7784]: I0223 13:05:05.108027 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerStarted","Data":"2e5a5c45572547d68765aa2317c14d26774c109bceb25a699d848d50d57f589e"} Feb 23 13:05:05.108158 master-0 kubenswrapper[7784]: I0223 13:05:05.108083 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" event={"ID":"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a","Type":"ContainerStarted","Data":"351d6bd3fffc58926870ccb551d0ae06c49eb4c93018c032eafa2e5693b7afa8"} Feb 23 13:05:05.108158 master-0 kubenswrapper[7784]: I0223 13:05:05.108119 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-zr6kv" event={"ID":"18386753-ec74-456d-838d-98c07c169b4b","Type":"ContainerStarted","Data":"d59199d32b4c1cf009022e047df2aaa0825f10711d01d5d7b8e7b3f0e111be5e"} Feb 23 13:05:05.108275 master-0 kubenswrapper[7784]: I0223 13:05:05.108157 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" event={"ID":"d71885db-c29e-429a-aa1f-1c274796a69f","Type":"ContainerStarted","Data":"d0087254289f4af07271aabb28eba273f88d636508f6df7747cf3ddecbb69a2f"} Feb 23 13:05:05.108275 master-0 kubenswrapper[7784]: 
I0223 13:05:05.108190 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79"} Feb 23 13:05:05.108275 master-0 kubenswrapper[7784]: I0223 13:05:05.108224 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" event={"ID":"7d0a976c-1492-4989-a5ff-e386564dd6ba","Type":"ContainerStarted","Data":"8cde1f19cb810bfc0e23817e9f19677e8c0220a99401e177bd646835a49df130"} Feb 23 13:05:05.108275 master-0 kubenswrapper[7784]: I0223 13:05:05.108256 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" event={"ID":"0d7c1ea0-e3c1-4494-bb27-058200b93ed7","Type":"ContainerStarted","Data":"8e9b49b38707e27425453037c02d2033a4a2c1167fba61cf19ea6b1a408467c1"} Feb 23 13:05:05.108471 master-0 kubenswrapper[7784]: I0223 13:05:05.108288 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" event={"ID":"d7c61886-6cc7-44aa-b56a-81cdcc670993","Type":"ContainerStarted","Data":"31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9"} Feb 23 13:05:05.108471 master-0 kubenswrapper[7784]: I0223 13:05:05.108323 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" event={"ID":"3a6b0d84-a344-43e4-b9c4-c8e0670528de","Type":"ContainerStarted","Data":"9c8597195d0409463fdcd6a998dac2cc725d58f59c79bb877a81633d44d4770e"} Feb 23 13:05:05.108471 master-0 kubenswrapper[7784]: I0223 13:05:05.108398 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" 
event={"ID":"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9","Type":"ContainerStarted","Data":"2c54d10cfb7d1125fe3e2904465852a4bc0618863aeafcc5bcf55a3b6a52c19e"} Feb 23 13:05:05.108471 master-0 kubenswrapper[7784]: I0223 13:05:05.108429 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" event={"ID":"35e97ed9-695d-483e-8878-4f231c79f1d2","Type":"ContainerStarted","Data":"d561b42a38c0b7df53cfb7f78adebe36b09daba8cb18cb5c6854b40cced2e255"} Feb 23 13:05:05.108471 master-0 kubenswrapper[7784]: I0223 13:05:05.108459 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"4bad4fd9-074b-4a4e-8af9-50bdc4be09df","Type":"ContainerDied","Data":"89a7eb5dd9ce527b37e8cbbeeed3ebf6bd149269a0623c131d3e7f8e71c4f12f"} Feb 23 13:05:05.108659 master-0 kubenswrapper[7784]: I0223 13:05:05.108487 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a7eb5dd9ce527b37e8cbbeeed3ebf6bd149269a0623c131d3e7f8e71c4f12f" Feb 23 13:05:05.108659 master-0 kubenswrapper[7784]: I0223 13:05:05.108515 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" event={"ID":"3daf0176-92e7-4642-8643-4afbefb77235","Type":"ContainerStarted","Data":"c240b880244e1bda8f990c7ed32e3d0ce7a19f2cf85f5a56c6a2929bc99c8abe"} Feb 23 13:05:05.115025 master-0 kubenswrapper[7784]: I0223 13:05:05.114970 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:05:05.154675 master-0 kubenswrapper[7784]: I0223 13:05:05.154607 7784 scope.go:117] "RemoveContainer" containerID="af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d" Feb 23 13:05:05.198314 master-0 kubenswrapper[7784]: I0223 13:05:05.196824 7784 scope.go:117] "RemoveContainer" 
containerID="664bed9a58a32d7def57d5398a174d1c1950d8f182a5fd20785e403d394c58a2" Feb 23 13:05:05.256643 master-0 kubenswrapper[7784]: I0223 13:05:05.248928 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 23 13:05:05.256643 master-0 kubenswrapper[7784]: I0223 13:05:05.249098 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 23 13:05:05.256643 master-0 kubenswrapper[7784]: I0223 13:05:05.254727 7784 scope.go:117] "RemoveContainer" containerID="745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae" Feb 23 13:05:05.273447 master-0 kubenswrapper[7784]: E0223 13:05:05.270169 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae\": container with ID starting with 745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae not found: ID does not exist" containerID="745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae" Feb 23 13:05:05.273447 master-0 kubenswrapper[7784]: I0223 13:05:05.270243 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae"} err="failed to get container status \"745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae\": rpc error: code = NotFound desc = could not find container \"745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae\": container with ID starting with 745381825bb1a140e6bacf0a5249d897ffe63f4596e6acaac7940810f86ee3ae not found: ID does not exist" Feb 23 13:05:05.273447 master-0 kubenswrapper[7784]: I0223 13:05:05.270281 7784 scope.go:117] "RemoveContainer" containerID="af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d" Feb 23 13:05:05.273447 master-0 kubenswrapper[7784]: E0223 13:05:05.270953 7784 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d\": container with ID starting with af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d not found: ID does not exist" containerID="af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d" Feb 23 13:05:05.273447 master-0 kubenswrapper[7784]: I0223 13:05:05.270979 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d"} err="failed to get container status \"af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d\": rpc error: code = NotFound desc = could not find container \"af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d\": container with ID starting with af0292781aa3f26ff9db6e9aa61240bea908f702a019fc2fad2e932d64c64a9d not found: ID does not exist" Feb 23 13:05:05.327445 master-0 kubenswrapper[7784]: I0223 13:05:05.327375 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 23 13:05:05.330469 master-0 kubenswrapper[7784]: I0223 13:05:05.329555 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 23 13:05:05.522852 master-0 kubenswrapper[7784]: I0223 13:05:05.522808 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797b4e06-e895-4ccc-a8f8-9de5d3a6663f" path="/var/lib/kubelet/pods/797b4e06-e895-4ccc-a8f8-9de5d3a6663f/volumes" Feb 23 13:05:05.523249 master-0 kubenswrapper[7784]: I0223 13:05:05.523231 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09724e9-277a-4fb0-a6c2-8f18ecefad60" path="/var/lib/kubelet/pods/c09724e9-277a-4fb0-a6c2-8f18ecefad60/volumes" Feb 23 13:05:05.721306 master-0 kubenswrapper[7784]: I0223 13:05:05.721246 7784 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-dpxl4_d71885db-c29e-429a-aa1f-1c274796a69f/openshift-controller-manager-operator/1.log" Feb 23 13:05:06.008528 master-0 kubenswrapper[7784]: E0223 13:05:06.008324 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:05:06.290317 master-0 kubenswrapper[7784]: I0223 13:05:06.289864 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:08.472565 master-0 kubenswrapper[7784]: I0223 13:05:08.472441 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:05:08.473716 master-0 kubenswrapper[7784]: E0223 13:05:08.472793 7784 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:05:08.473716 master-0 kubenswrapper[7784]: E0223 13:05:08.472919 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert podName:e6f93af9-bdbb-4319-8ddb-e5458e8a9275 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:10.472879824 +0000 UTC m=+373.207733517 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-lqc9w" (UID: "e6f93af9-bdbb-4319-8ddb-e5458e8a9275") : secret "package-server-manager-serving-cert" not found Feb 23 13:05:09.291024 master-0 kubenswrapper[7784]: I0223 13:05:09.290898 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:05:09.743952 master-0 kubenswrapper[7784]: I0223 13:05:09.743764 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:10.479367 master-0 kubenswrapper[7784]: I0223 13:05:10.477265 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:10.479963 master-0 kubenswrapper[7784]: I0223 13:05:10.479587 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:05:10.778634 master-0 kubenswrapper[7784]: E0223 13:05:10.778586 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:11.836555 master-0 kubenswrapper[7784]: I0223 13:05:11.835827 7784 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-rlbcj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 
13:05:11.836555 master-0 kubenswrapper[7784]: I0223 13:05:11.836472 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" podUID="f348bffa-b2f6-4695-88a7-923625e7fb02" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:05:14.743526 master-0 kubenswrapper[7784]: I0223 13:05:14.743403 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:14.785173 master-0 kubenswrapper[7784]: I0223 13:05:14.785009 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:14.808450 master-0 kubenswrapper[7784]: I0223 13:05:14.808322 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:14.841868 master-0 kubenswrapper[7784]: I0223 13:05:14.841753 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=4.841724463 podStartE2EDuration="4.841724463s" podCreationTimestamp="2026-02-23 13:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:14.836398373 +0000 UTC m=+257.571252086" watchObservedRunningTime="2026-02-23 13:05:14.841724463 +0000 UTC m=+257.576578146" Feb 23 13:05:16.296599 master-0 kubenswrapper[7784]: I0223 13:05:16.296507 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:16.305093 master-0 kubenswrapper[7784]: I0223 13:05:16.305049 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:21.835276 master-0 kubenswrapper[7784]: I0223 13:05:21.835118 7784 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-rlbcj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:05:21.836571 master-0 kubenswrapper[7784]: I0223 13:05:21.835237 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" podUID="f348bffa-b2f6-4695-88a7-923625e7fb02" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:05:23.010109 master-0 kubenswrapper[7784]: E0223 13:05:23.010000 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:05:40.490554 master-0 kubenswrapper[7784]: I0223 13:05:40.490436 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: E0223 13:05:40.490829 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09724e9-277a-4fb0-a6c2-8f18ecefad60" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.490859 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09724e9-277a-4fb0-a6c2-8f18ecefad60" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: E0223 13:05:40.490883 
7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797b4e06-e895-4ccc-a8f8-9de5d3a6663f" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.490897 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="797b4e06-e895-4ccc-a8f8-9de5d3a6663f" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: E0223 13:05:40.490920 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283fd2f4-771b-4592-a143-b7e3a5ed6765" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.490934 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="283fd2f4-771b-4592-a143-b7e3a5ed6765" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: E0223 13:05:40.490957 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bad4fd9-074b-4a4e-8af9-50bdc4be09df" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.490970 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bad4fd9-074b-4a4e-8af9-50bdc4be09df" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: E0223 13:05:40.490992 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3a080-c8a3-4359-9442-972bf4bb9b04" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.491006 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3a080-c8a3-4359-9442-972bf4bb9b04" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: E0223 13:05:40.491041 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b76471-bb9d-45a1-b3be-53e4f013e604" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.491056 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b76471-bb9d-45a1-b3be-53e4f013e604" containerName="installer" Feb 23 13:05:40.491652 
master-0 kubenswrapper[7784]: I0223 13:05:40.491200 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b76471-bb9d-45a1-b3be-53e4f013e604" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.491239 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="797b4e06-e895-4ccc-a8f8-9de5d3a6663f" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.491278 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09724e9-277a-4fb0-a6c2-8f18ecefad60" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.491302 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="283fd2f4-771b-4592-a143-b7e3a5ed6765" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.491333 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d3a080-c8a3-4359-9442-972bf4bb9b04" containerName="installer" Feb 23 13:05:40.491652 master-0 kubenswrapper[7784]: I0223 13:05:40.491393 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bad4fd9-074b-4a4e-8af9-50bdc4be09df" containerName="installer" Feb 23 13:05:40.492752 master-0 kubenswrapper[7784]: I0223 13:05:40.492127 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.495560 master-0 kubenswrapper[7784]: I0223 13:05:40.495484 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-c49mg" Feb 23 13:05:40.495881 master-0 kubenswrapper[7784]: I0223 13:05:40.495805 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 23 13:05:40.553228 master-0 kubenswrapper[7784]: I0223 13:05:40.552343 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 23 13:05:40.606928 master-0 kubenswrapper[7784]: I0223 13:05:40.606827 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-var-lock\") pod \"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.607157 master-0 kubenswrapper[7784]: I0223 13:05:40.606947 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.607157 master-0 kubenswrapper[7784]: I0223 13:05:40.607009 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.708371 master-0 kubenswrapper[7784]: I0223 13:05:40.708245 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-var-lock\") pod \"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.708714 master-0 kubenswrapper[7784]: I0223 13:05:40.708386 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.708714 master-0 kubenswrapper[7784]: I0223 13:05:40.708390 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-var-lock\") pod \"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.708714 master-0 kubenswrapper[7784]: I0223 13:05:40.708424 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.708714 master-0 kubenswrapper[7784]: I0223 13:05:40.708488 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.735213 master-0 kubenswrapper[7784]: I0223 13:05:40.735155 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:40.871741 master-0 kubenswrapper[7784]: I0223 13:05:40.871645 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 23 13:05:41.324772 master-0 kubenswrapper[7784]: I0223 13:05:41.322823 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 23 13:05:41.406834 master-0 kubenswrapper[7784]: I0223 13:05:41.406757 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7"] Feb 23 13:05:41.407445 master-0 kubenswrapper[7784]: I0223 13:05:41.407430 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:05:41.413232 master-0 kubenswrapper[7784]: W0223 13:05:41.413180 7784 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-qqvc9": failed to list *v1.Secret: secrets "cloud-credential-operator-dockercfg-qqvc9" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-cloud-credential-operator": no relationship found between node 'master-0' and this object Feb 23 13:05:41.413328 master-0 kubenswrapper[7784]: E0223 13:05:41.413247 7784 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-dockercfg-qqvc9\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cloud-credential-operator-dockercfg-qqvc9\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cloud-credential-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 23 13:05:41.413390 
master-0 kubenswrapper[7784]: W0223 13:05:41.413366 7784 reflector.go:561] object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cloud-credential-operator": no relationship found between node 'master-0' and this object Feb 23 13:05:41.413423 master-0 kubenswrapper[7784]: E0223 13:05:41.413389 7784 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cloud-credential-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 23 13:05:41.413487 master-0 kubenswrapper[7784]: W0223 13:05:41.413454 7784 reflector.go:561] object-"openshift-cloud-credential-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cloud-credential-operator": no relationship found between node 'master-0' and this object Feb 23 13:05:41.413487 master-0 kubenswrapper[7784]: E0223 13:05:41.413479 7784 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cloud-credential-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 23 13:05:41.413582 master-0 kubenswrapper[7784]: W0223 
13:05:41.413555 7784 reflector.go:561] object-"openshift-cloud-credential-operator"/"cco-trusted-ca": failed to list *v1.ConfigMap: configmaps "cco-trusted-ca" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cloud-credential-operator": no relationship found between node 'master-0' and this object Feb 23 13:05:41.413618 master-0 kubenswrapper[7784]: E0223 13:05:41.413582 7784 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cco-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cco-trusted-ca\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cloud-credential-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 23 13:05:41.413664 master-0 kubenswrapper[7784]: W0223 13:05:41.413644 7784 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert": failed to list *v1.Secret: secrets "cloud-credential-operator-serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-cloud-credential-operator": no relationship found between node 'master-0' and this object Feb 23 13:05:41.413699 master-0 kubenswrapper[7784]: E0223 13:05:41.413678 7784 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cloud-credential-operator-serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cloud-credential-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 23 13:05:41.426400 master-0 kubenswrapper[7784]: I0223 13:05:41.424377 7784 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7"] Feb 23 13:05:41.475154 master-0 kubenswrapper[7784]: I0223 13:05:41.475096 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr"] Feb 23 13:05:41.475788 master-0 kubenswrapper[7784]: I0223 13:05:41.475760 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.482633 master-0 kubenswrapper[7784]: I0223 13:05:41.479903 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 13:05:41.482633 master-0 kubenswrapper[7784]: I0223 13:05:41.480031 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nhhd2" Feb 23 13:05:41.482633 master-0 kubenswrapper[7784]: I0223 13:05:41.480099 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 13:05:41.482633 master-0 kubenswrapper[7784]: I0223 13:05:41.480217 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 13:05:41.482633 master-0 kubenswrapper[7784]: I0223 13:05:41.480456 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 13:05:41.482633 master-0 kubenswrapper[7784]: I0223 13:05:41.480671 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 13:05:41.500054 master-0 kubenswrapper[7784]: I0223 13:05:41.496155 7784 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv"] Feb 23 13:05:41.500054 master-0 kubenswrapper[7784]: I0223 13:05:41.496993 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:05:41.500054 master-0 kubenswrapper[7784]: I0223 13:05:41.499824 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-zwkpp" Feb 23 13:05:41.500054 master-0 kubenswrapper[7784]: I0223 13:05:41.499999 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 13:05:41.500709 master-0 kubenswrapper[7784]: I0223 13:05:41.500129 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 13:05:41.500709 master-0 kubenswrapper[7784]: I0223 13:05:41.500279 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 13:05:41.505146 master-0 kubenswrapper[7784]: I0223 13:05:41.503220 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6"] Feb 23 13:05:41.505146 master-0 kubenswrapper[7784]: I0223 13:05:41.504083 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.505304 master-0 kubenswrapper[7784]: I0223 13:05:41.505223 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:05:41.505845 master-0 kubenswrapper[7784]: I0223 13:05:41.505671 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 13:05:41.505953 master-0 kubenswrapper[7784]: I0223 13:05:41.505928 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 13:05:41.505953 master-0 kubenswrapper[7784]: I0223 13:05:41.505934 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-v42tl" Feb 23 13:05:41.513387 master-0 kubenswrapper[7784]: I0223 13:05:41.512644 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh"] Feb 23 13:05:41.513387 master-0 kubenswrapper[7784]: I0223 13:05:41.513231 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:05:41.513575 master-0 kubenswrapper[7784]: I0223 13:05:41.513522 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb"] Feb 23 13:05:41.529731 master-0 kubenswrapper[7784]: I0223 13:05:41.529677 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 23 13:05:41.530702 master-0 kubenswrapper[7784]: I0223 13:05:41.530662 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-6tkzc" Feb 23 13:05:41.531152 master-0 kubenswrapper[7784]: I0223 13:05:41.531109 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:05:41.531189 master-0 kubenswrapper[7784]: I0223 13:05:41.531164 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbk8g\" (UniqueName: \"kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:05:41.531221 master-0 kubenswrapper[7784]: I0223 13:05:41.531192 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:05:41.532194 master-0 kubenswrapper[7784]: I0223 13:05:41.532162 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.535643 master-0 kubenswrapper[7784]: I0223 13:05:41.535432 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4whws" Feb 23 13:05:41.535643 master-0 kubenswrapper[7784]: I0223 13:05:41.535635 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 23 13:05:41.535762 master-0 kubenswrapper[7784]: I0223 13:05:41.535744 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 23 13:05:41.536040 master-0 kubenswrapper[7784]: I0223 13:05:41.536013 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 23 13:05:41.536135 master-0 kubenswrapper[7784]: I0223 13:05:41.536102 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:05:41.536230 master-0 kubenswrapper[7784]: I0223 13:05:41.536208 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:05:41.547974 master-0 kubenswrapper[7784]: I0223 13:05:41.547920 7784 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv"] Feb 23 13:05:41.547974 master-0 kubenswrapper[7784]: I0223 13:05:41.547961 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6"] Feb 23 13:05:41.550694 master-0 kubenswrapper[7784]: I0223 13:05:41.550667 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh"] Feb 23 13:05:41.575848 master-0 kubenswrapper[7784]: I0223 13:05:41.575398 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th"] Feb 23 13:05:41.582938 master-0 kubenswrapper[7784]: I0223 13:05:41.576966 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.582938 master-0 kubenswrapper[7784]: I0223 13:05:41.579405 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-46ht7" Feb 23 13:05:41.582938 master-0 kubenswrapper[7784]: I0223 13:05:41.579644 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 13:05:41.582938 master-0 kubenswrapper[7784]: I0223 13:05:41.579723 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 13:05:41.582938 master-0 kubenswrapper[7784]: I0223 13:05:41.579929 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 13:05:41.582938 master-0 kubenswrapper[7784]: I0223 13:05:41.580005 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 
13:05:41.582938 master-0 kubenswrapper[7784]: I0223 13:05:41.581440 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn"] Feb 23 13:05:41.582938 master-0 kubenswrapper[7784]: I0223 13:05:41.582480 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.583366 master-0 kubenswrapper[7784]: I0223 13:05:41.583218 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 13:05:41.583650 master-0 kubenswrapper[7784]: I0223 13:05:41.583620 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 23 13:05:41.584134 master-0 kubenswrapper[7784]: I0223 13:05:41.584108 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-8d884" Feb 23 13:05:41.585514 master-0 kubenswrapper[7784]: I0223 13:05:41.585035 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-sswng"] Feb 23 13:05:41.585587 master-0 kubenswrapper[7784]: I0223 13:05:41.585527 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.587327 master-0 kubenswrapper[7784]: I0223 13:05:41.587269 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 23 13:05:41.587776 master-0 kubenswrapper[7784]: I0223 13:05:41.587607 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 23 13:05:41.589824 master-0 kubenswrapper[7784]: I0223 13:05:41.587919 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg"] Feb 23 13:05:41.589824 master-0 kubenswrapper[7784]: I0223 13:05:41.588924 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2ttqf" Feb 23 13:05:41.589824 master-0 kubenswrapper[7784]: I0223 13:05:41.589142 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.590196 master-0 kubenswrapper[7784]: I0223 13:05:41.589996 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 23 13:05:41.590270 master-0 kubenswrapper[7784]: I0223 13:05:41.590212 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 23 13:05:41.590330 master-0 kubenswrapper[7784]: I0223 13:05:41.590292 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 23 13:05:41.590414 master-0 kubenswrapper[7784]: I0223 13:05:41.590376 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th"] Feb 23 13:05:41.592534 master-0 kubenswrapper[7784]: I0223 13:05:41.592084 7784 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-nm845"] Feb 23 13:05:41.592534 master-0 kubenswrapper[7784]: I0223 13:05:41.592484 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 23 13:05:41.592534 master-0 kubenswrapper[7784]: I0223 13:05:41.592507 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 23 13:05:41.592706 master-0 kubenswrapper[7784]: I0223 13:05:41.592685 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 23 13:05:41.592779 master-0 kubenswrapper[7784]: I0223 13:05:41.592757 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xfzk8" Feb 23 13:05:41.592890 master-0 kubenswrapper[7784]: I0223 13:05:41.592871 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 23 13:05:41.594500 master-0 kubenswrapper[7784]: I0223 13:05:41.593013 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.598611 master-0 kubenswrapper[7784]: I0223 13:05:41.596380 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 13:05:41.598611 master-0 kubenswrapper[7784]: I0223 13:05:41.597750 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 13:05:41.598611 master-0 kubenswrapper[7784]: I0223 13:05:41.597862 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 13:05:41.598611 master-0 kubenswrapper[7784]: I0223 13:05:41.597974 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 23 13:05:41.598611 master-0 kubenswrapper[7784]: I0223 13:05:41.598082 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-dsztp" Feb 23 13:05:41.603321 master-0 kubenswrapper[7784]: I0223 13:05:41.601938 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn"] Feb 23 13:05:41.607167 master-0 kubenswrapper[7784]: I0223 13:05:41.607097 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-nm845"] Feb 23 13:05:41.609047 master-0 kubenswrapper[7784]: I0223 13:05:41.609020 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg"] Feb 23 13:05:41.623872 master-0 kubenswrapper[7784]: I0223 13:05:41.623763 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-sswng"] Feb 23 13:05:41.631950 master-0 kubenswrapper[7784]: I0223 13:05:41.631891 7784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdj2\" (UniqueName: \"kubernetes.io/projected/bd537759-1528-465f-a3bb-e56fbf4cee74-kube-api-access-cvdj2\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.631950 master-0 kubenswrapper[7784]: I0223 13:05:41.631947 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd537759-1528-465f-a3bb-e56fbf4cee74-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.631978 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.631998 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znzzv\" (UniqueName: \"kubernetes.io/projected/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-kube-api-access-znzzv\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.632021 7784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.632040 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.632064 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbk8g\" (UniqueName: \"kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.632083 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-auth-proxy-config\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.632104 7784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.632126 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21541c8c-3d6b-4af1-a03a-d899cebb9c26-machine-approver-tls\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.632164 master-0 kubenswrapper[7784]: I0223 13:05:41.632168 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-config\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.632458 master-0 kubenswrapper[7784]: I0223 13:05:41.632188 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjkd\" (UniqueName: \"kubernetes.io/projected/21541c8c-3d6b-4af1-a03a-d899cebb9c26-kube-api-access-bfjkd\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.632458 master-0 kubenswrapper[7784]: I0223 13:05:41.632278 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z8xh\" (UniqueName: 
\"kubernetes.io/projected/affc63b7-db45-429d-82ff-e50f6aae51dc-kube-api-access-5z8xh\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:05:41.632554 master-0 kubenswrapper[7784]: I0223 13:05:41.632511 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:05:41.632706 master-0 kubenswrapper[7784]: I0223 13:05:41.632677 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bd537759-1528-465f-a3bb-e56fbf4cee74-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.632779 master-0 kubenswrapper[7784]: I0223 13:05:41.632751 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/affc63b7-db45-429d-82ff-e50f6aae51dc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:05:41.632814 master-0 kubenswrapper[7784]: I0223 13:05:41.632786 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.632855 master-0 kubenswrapper[7784]: I0223 13:05:41.632843 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbhv\" (UniqueName: \"kubernetes.io/projected/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-kube-api-access-mhbhv\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.633052 master-0 kubenswrapper[7784]: I0223 13:05:41.632985 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.691853 master-0 kubenswrapper[7784]: I0223 13:05:41.691808 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"] Feb 23 13:05:41.693074 master-0 kubenswrapper[7784]: I0223 13:05:41.693041 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.694918 master-0 kubenswrapper[7784]: I0223 13:05:41.694900 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-2hz68" Feb 23 13:05:41.696792 master-0 kubenswrapper[7784]: I0223 13:05:41.696770 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 13:05:41.697311 master-0 kubenswrapper[7784]: I0223 13:05:41.697288 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 13:05:41.697456 master-0 kubenswrapper[7784]: I0223 13:05:41.697435 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"] Feb 23 13:05:41.700289 master-0 kubenswrapper[7784]: I0223 13:05:41.700258 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"] Feb 23 13:05:41.700382 master-0 kubenswrapper[7784]: I0223 13:05:41.700367 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.701702 master-0 kubenswrapper[7784]: I0223 13:05:41.701668 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 13:05:41.702422 master-0 kubenswrapper[7784]: I0223 13:05:41.702389 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"] Feb 23 13:05:41.733526 master-0 kubenswrapper[7784]: I0223 13:05:41.733471 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-images\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.733706 master-0 kubenswrapper[7784]: I0223 13:05:41.733530 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21541c8c-3d6b-4af1-a03a-d899cebb9c26-machine-approver-tls\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.733706 master-0 kubenswrapper[7784]: I0223 13:05:41.733561 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhq2x\" (UniqueName: \"kubernetes.io/projected/47dedc5d-1288-4020-b481-5dca68a7d437-kube-api-access-hhq2x\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.733706 master-0 kubenswrapper[7784]: I0223 13:05:41.733586 7784 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-config\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.733706 master-0 kubenswrapper[7784]: I0223 13:05:41.733608 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.733706 master-0 kubenswrapper[7784]: I0223 13:05:41.733637 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjkd\" (UniqueName: \"kubernetes.io/projected/21541c8c-3d6b-4af1-a03a-d899cebb9c26-kube-api-access-bfjkd\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.733706 master-0 kubenswrapper[7784]: I0223 13:05:41.733664 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/47dedc5d-1288-4020-b481-5dca68a7d437-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.733876 master-0 kubenswrapper[7784]: I0223 13:05:41.733749 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdbct\" (UniqueName: \"kubernetes.io/projected/bf57b864-25d7-4420-9052-04dd580a9f7d-kube-api-access-bdbct\") pod 
\"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.734211 master-0 kubenswrapper[7784]: I0223 13:05:41.734169 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z8xh\" (UniqueName: \"kubernetes.io/projected/affc63b7-db45-429d-82ff-e50f6aae51dc-kube-api-access-5z8xh\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:05:41.734293 master-0 kubenswrapper[7784]: I0223 13:05:41.734263 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:05:41.734712 master-0 kubenswrapper[7784]: I0223 13:05:41.734296 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bd537759-1528-465f-a3bb-e56fbf4cee74-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.734712 master-0 kubenswrapper[7784]: I0223 13:05:41.734348 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/affc63b7-db45-429d-82ff-e50f6aae51dc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:05:41.734712 master-0 kubenswrapper[7784]: I0223 13:05:41.734365 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-config\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.734712 master-0 kubenswrapper[7784]: I0223 13:05:41.734481 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bd537759-1528-465f-a3bb-e56fbf4cee74-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.735447 master-0 kubenswrapper[7784]: I0223 13:05:41.734795 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-config\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.735447 master-0 kubenswrapper[7784]: I0223 13:05:41.734838 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.735447 master-0 kubenswrapper[7784]: I0223 13:05:41.735252 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.735554 master-0 kubenswrapper[7784]: I0223 13:05:41.735472 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.735656 master-0 kubenswrapper[7784]: I0223 13:05:41.735638 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.735740 master-0 kubenswrapper[7784]: I0223 13:05:41.735723 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbhv\" (UniqueName: \"kubernetes.io/projected/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-kube-api-access-mhbhv\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.735822 master-0 kubenswrapper[7784]: I0223 13:05:41.735807 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ce55de54-8441-4a16-8b57-598042869000-snapshots\") pod \"insights-operator-59b498fcfb-sswng\" (UID: 
\"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.735911 master-0 kubenswrapper[7784]: I0223 13:05:41.735897 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.735988 master-0 kubenswrapper[7784]: I0223 13:05:41.735976 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.736068 master-0 kubenswrapper[7784]: I0223 13:05:41.736054 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.736143 master-0 kubenswrapper[7784]: I0223 13:05:41.736129 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.736283 master-0 kubenswrapper[7784]: I0223 
13:05:41.736270 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.736381 master-0 kubenswrapper[7784]: I0223 13:05:41.736367 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.736509 master-0 kubenswrapper[7784]: I0223 13:05:41.736494 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.736615 master-0 kubenswrapper[7784]: I0223 13:05:41.736601 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvdj2\" (UniqueName: \"kubernetes.io/projected/bd537759-1528-465f-a3bb-e56fbf4cee74-kube-api-access-cvdj2\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.736723 master-0 kubenswrapper[7784]: I0223 13:05:41.736707 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd537759-1528-465f-a3bb-e56fbf4cee74-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.736819 master-0 kubenswrapper[7784]: I0223 13:05:41.736807 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sh26\" (UniqueName: \"kubernetes.io/projected/ce55de54-8441-4a16-8b57-598042869000-kube-api-access-6sh26\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.738170 master-0 kubenswrapper[7784]: I0223 13:05:41.738152 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.738265 master-0 kubenswrapper[7784]: I0223 13:05:41.738250 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znzzv\" (UniqueName: \"kubernetes.io/projected/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-kube-api-access-znzzv\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:05:41.738376 master-0 kubenswrapper[7784]: I0223 13:05:41.738358 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn9mt\" (UniqueName: 
\"kubernetes.io/projected/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-kube-api-access-nn9mt\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.738468 master-0 kubenswrapper[7784]: I0223 13:05:41.738455 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.738593 master-0 kubenswrapper[7784]: I0223 13:05:41.738578 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-auth-proxy-config\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.747098 master-0 kubenswrapper[7784]: I0223 13:05:41.737920 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.747098 master-0 kubenswrapper[7784]: I0223 13:05:41.737065 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21541c8c-3d6b-4af1-a03a-d899cebb9c26-machine-approver-tls\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " 
pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.747098 master-0 kubenswrapper[7784]: I0223 13:05:41.737694 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.747098 master-0 kubenswrapper[7784]: I0223 13:05:41.739298 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-auth-proxy-config\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.747098 master-0 kubenswrapper[7784]: I0223 13:05:41.739951 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:05:41.747098 master-0 kubenswrapper[7784]: I0223 13:05:41.739988 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.747098 master-0 kubenswrapper[7784]: I0223 13:05:41.741327 
7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/affc63b7-db45-429d-82ff-e50f6aae51dc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:05:41.747098 master-0 kubenswrapper[7784]: I0223 13:05:41.742158 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd537759-1528-465f-a3bb-e56fbf4cee74-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.747446 master-0 kubenswrapper[7784]: I0223 13:05:41.747428 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.747546 master-0 kubenswrapper[7784]: I0223 13:05:41.747531 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7b4r\" (UniqueName: \"kubernetes.io/projected/5ede583b-44b0-42af-92c9-f7b8938f7843-kube-api-access-p7b4r\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.747656 master-0 kubenswrapper[7784]: I0223 13:05:41.747625 7784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.748029 master-0 kubenswrapper[7784]: I0223 13:05:41.747989 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.750908 master-0 kubenswrapper[7784]: I0223 13:05:41.750874 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfjkd\" (UniqueName: \"kubernetes.io/projected/21541c8c-3d6b-4af1-a03a-d899cebb9c26-kube-api-access-bfjkd\") pod \"machine-approver-798b897698-fhsgr\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.751993 master-0 kubenswrapper[7784]: I0223 13:05:41.751960 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbhv\" (UniqueName: \"kubernetes.io/projected/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-kube-api-access-mhbhv\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.752505 master-0 kubenswrapper[7784]: I0223 13:05:41.752470 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z8xh\" (UniqueName: 
\"kubernetes.io/projected/affc63b7-db45-429d-82ff-e50f6aae51dc-kube-api-access-5z8xh\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:05:41.753174 master-0 kubenswrapper[7784]: I0223 13:05:41.753138 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvdj2\" (UniqueName: \"kubernetes.io/projected/bd537759-1528-465f-a3bb-e56fbf4cee74-kube-api-access-cvdj2\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.757350 master-0 kubenswrapper[7784]: I0223 13:05:41.756977 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znzzv\" (UniqueName: \"kubernetes.io/projected/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-kube-api-access-znzzv\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:05:41.812847 master-0 kubenswrapper[7784]: I0223 13:05:41.812794 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:41.827241 master-0 kubenswrapper[7784]: W0223 13:05:41.827133 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21541c8c_3d6b_4af1_a03a_d899cebb9c26.slice/crio-884e5b0dfd1e1d79bfbe276b1a0f2dac6f4b84154b57ab496bf6e8d3c5c08e2b WatchSource:0}: Error finding container 884e5b0dfd1e1d79bfbe276b1a0f2dac6f4b84154b57ab496bf6e8d3c5c08e2b: Status 404 returned error can't find the container with id 884e5b0dfd1e1d79bfbe276b1a0f2dac6f4b84154b57ab496bf6e8d3c5c08e2b Feb 23 13:05:41.849395 master-0 kubenswrapper[7784]: I0223 13:05:41.849354 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdbct\" (UniqueName: \"kubernetes.io/projected/bf57b864-25d7-4420-9052-04dd580a9f7d-kube-api-access-bdbct\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.849710 master-0 kubenswrapper[7784]: I0223 13:05:41.849685 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-config\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.849775 master-0 kubenswrapper[7784]: I0223 13:05:41.849712 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.850002 master-0 kubenswrapper[7784]: I0223 
13:05:41.849974 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjcw\" (UniqueName: \"kubernetes.io/projected/898e6c96-73d5-4dc5-a383-986599a5bcd9-kube-api-access-znjcw\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.850114 master-0 kubenswrapper[7784]: I0223 13:05:41.850097 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.850235 master-0 kubenswrapper[7784]: I0223 13:05:41.850216 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.850364 master-0 kubenswrapper[7784]: I0223 13:05:41.850324 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ce55de54-8441-4a16-8b57-598042869000-snapshots\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.850479 master-0 kubenswrapper[7784]: I0223 13:05:41.850459 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ght2z\" (UniqueName: \"kubernetes.io/projected/7cadeb05-9298-4bcf-b6f2-659c68eba020-kube-api-access-ght2z\") 
pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.850581 master-0 kubenswrapper[7784]: I0223 13:05:41.850565 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.850754 master-0 kubenswrapper[7784]: I0223 13:05:41.850732 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.850874 master-0 kubenswrapper[7784]: I0223 13:05:41.850853 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.850970 master-0 kubenswrapper[7784]: I0223 13:05:41.850953 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.851072 master-0 kubenswrapper[7784]: I0223 13:05:41.851055 7784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.851168 master-0 kubenswrapper[7784]: I0223 13:05:41.851153 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.851269 master-0 kubenswrapper[7784]: I0223 13:05:41.851253 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.851390 master-0 kubenswrapper[7784]: I0223 13:05:41.851373 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.851499 master-0 kubenswrapper[7784]: I0223 13:05:41.851482 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sh26\" (UniqueName: \"kubernetes.io/projected/ce55de54-8441-4a16-8b57-598042869000-kube-api-access-6sh26\") pod \"insights-operator-59b498fcfb-sswng\" (UID: 
\"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.851583 master-0 kubenswrapper[7784]: I0223 13:05:41.850763 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-config\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.851668 master-0 kubenswrapper[7784]: I0223 13:05:41.851278 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.851764 master-0 kubenswrapper[7784]: I0223 13:05:41.851746 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.851876 master-0 kubenswrapper[7784]: I0223 13:05:41.851854 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn9mt\" (UniqueName: \"kubernetes.io/projected/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-kube-api-access-nn9mt\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.851991 master-0 kubenswrapper[7784]: I0223 13:05:41.851972 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.852123 master-0 kubenswrapper[7784]: I0223 13:05:41.852102 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.852236 master-0 kubenswrapper[7784]: I0223 13:05:41.852219 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7b4r\" (UniqueName: \"kubernetes.io/projected/5ede583b-44b0-42af-92c9-f7b8938f7843-kube-api-access-p7b4r\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.852380 master-0 kubenswrapper[7784]: I0223 13:05:41.852354 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.852446 master-0 kubenswrapper[7784]: I0223 13:05:41.851986 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " 
pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.852446 master-0 kubenswrapper[7784]: I0223 13:05:41.852261 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.852728 master-0 kubenswrapper[7784]: I0223 13:05:41.852695 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.852856 master-0 kubenswrapper[7784]: I0223 13:05:41.852828 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.852969 master-0 kubenswrapper[7784]: I0223 13:05:41.852341 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-images\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.853051 master-0 kubenswrapper[7784]: I0223 13:05:41.852357 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.853051 master-0 kubenswrapper[7784]: I0223 13:05:41.852997 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhq2x\" (UniqueName: \"kubernetes.io/projected/47dedc5d-1288-4020-b481-5dca68a7d437-kube-api-access-hhq2x\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.853146 master-0 kubenswrapper[7784]: I0223 13:05:41.853072 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.853146 master-0 kubenswrapper[7784]: I0223 13:05:41.853106 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/47dedc5d-1288-4020-b481-5dca68a7d437-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.853146 master-0 kubenswrapper[7784]: I0223 13:05:41.853128 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.853929 master-0 kubenswrapper[7784]: I0223 13:05:41.853785 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.854938 master-0 kubenswrapper[7784]: I0223 13:05:41.854878 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ce55de54-8441-4a16-8b57-598042869000-snapshots\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.855328 master-0 kubenswrapper[7784]: I0223 13:05:41.852225 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.855472 master-0 kubenswrapper[7784]: I0223 13:05:41.855450 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-images\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.855966 master-0 kubenswrapper[7784]: I0223 13:05:41.855924 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert\") pod 
\"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.856568 master-0 kubenswrapper[7784]: I0223 13:05:41.856528 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.857784 master-0 kubenswrapper[7784]: I0223 13:05:41.857738 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/47dedc5d-1288-4020-b481-5dca68a7d437-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.866594 master-0 kubenswrapper[7784]: I0223 13:05:41.863947 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.866594 master-0 kubenswrapper[7784]: I0223 13:05:41.864621 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:05:41.867528 master-0 kubenswrapper[7784]: I0223 13:05:41.867014 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdbct\" (UniqueName: \"kubernetes.io/projected/bf57b864-25d7-4420-9052-04dd580a9f7d-kube-api-access-bdbct\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.867528 master-0 kubenswrapper[7784]: I0223 13:05:41.867234 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn9mt\" (UniqueName: \"kubernetes.io/projected/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-kube-api-access-nn9mt\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.868333 master-0 kubenswrapper[7784]: I0223 13:05:41.868299 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sh26\" (UniqueName: \"kubernetes.io/projected/ce55de54-8441-4a16-8b57-598042869000-kube-api-access-6sh26\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:41.871242 master-0 kubenswrapper[7784]: I0223 13:05:41.871212 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhq2x\" (UniqueName: \"kubernetes.io/projected/47dedc5d-1288-4020-b481-5dca68a7d437-kube-api-access-hhq2x\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:41.871862 master-0 kubenswrapper[7784]: I0223 13:05:41.871832 7784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p7b4r\" (UniqueName: \"kubernetes.io/projected/5ede583b-44b0-42af-92c9-f7b8938f7843-kube-api-access-p7b4r\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.905449 master-0 kubenswrapper[7784]: I0223 13:05:41.905399 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:41.922517 master-0 kubenswrapper[7784]: I0223 13:05:41.922419 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:05:41.932526 master-0 kubenswrapper[7784]: I0223 13:05:41.932472 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:41.946040 master-0 kubenswrapper[7784]: I0223 13:05:41.945982 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:05:41.950900 master-0 kubenswrapper[7784]: W0223 13:05:41.950863 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd537759_1528_465f_a3bb_e56fbf4cee74.slice/crio-477048cf7928248cd8970baea6983da04450948ff32b9ff1904c0c54deeb1651 WatchSource:0}: Error finding container 477048cf7928248cd8970baea6983da04450948ff32b9ff1904c0c54deeb1651: Status 404 returned error can't find the container with id 477048cf7928248cd8970baea6983da04450948ff32b9ff1904c0c54deeb1651 Feb 23 13:05:41.954085 master-0 kubenswrapper[7784]: I0223 13:05:41.954050 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.955064 master-0 kubenswrapper[7784]: I0223 13:05:41.954251 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.955064 master-0 kubenswrapper[7784]: I0223 13:05:41.954326 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.955064 
master-0 kubenswrapper[7784]: I0223 13:05:41.954381 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znjcw\" (UniqueName: \"kubernetes.io/projected/898e6c96-73d5-4dc5-a383-986599a5bcd9-kube-api-access-znjcw\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.955064 master-0 kubenswrapper[7784]: I0223 13:05:41.954405 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ght2z\" (UniqueName: \"kubernetes.io/projected/7cadeb05-9298-4bcf-b6f2-659c68eba020-kube-api-access-ght2z\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.955064 master-0 kubenswrapper[7784]: I0223 13:05:41.954436 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.957548 master-0 kubenswrapper[7784]: I0223 13:05:41.957504 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.959654 master-0 kubenswrapper[7784]: I0223 13:05:41.959608 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert\") pod 
\"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.961114 master-0 kubenswrapper[7784]: I0223 13:05:41.961069 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.965947 master-0 kubenswrapper[7784]: I0223 13:05:41.965698 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:05:41.969105 master-0 kubenswrapper[7784]: I0223 13:05:41.969073 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.978609 master-0 kubenswrapper[7784]: I0223 13:05:41.978545 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znjcw\" (UniqueName: \"kubernetes.io/projected/898e6c96-73d5-4dc5-a383-986599a5bcd9-kube-api-access-znjcw\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:41.984773 master-0 kubenswrapper[7784]: I0223 13:05:41.984668 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" 
event={"ID":"bd537759-1528-465f-a3bb-e56fbf4cee74","Type":"ContainerStarted","Data":"477048cf7928248cd8970baea6983da04450948ff32b9ff1904c0c54deeb1651"} Feb 23 13:05:41.991045 master-0 kubenswrapper[7784]: I0223 13:05:41.990977 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" event={"ID":"21541c8c-3d6b-4af1-a03a-d899cebb9c26","Type":"ContainerStarted","Data":"884e5b0dfd1e1d79bfbe276b1a0f2dac6f4b84154b57ab496bf6e8d3c5c08e2b"} Feb 23 13:05:41.991150 master-0 kubenswrapper[7784]: I0223 13:05:41.991072 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ght2z\" (UniqueName: \"kubernetes.io/projected/7cadeb05-9298-4bcf-b6f2-659c68eba020-kube-api-access-ght2z\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:41.992649 master-0 kubenswrapper[7784]: I0223 13:05:41.992322 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:05:41.995448 master-0 kubenswrapper[7784]: I0223 13:05:41.994775 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449","Type":"ContainerStarted","Data":"6cd0275f1f7db307e43c09e5b7b938a05a638192648b348b83255e2e4d8e9eb8"} Feb 23 13:05:41.995448 master-0 kubenswrapper[7784]: I0223 13:05:41.994803 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449","Type":"ContainerStarted","Data":"109b623b5a1ea0fcc0a5a5fd7d747c9ee8a3d9d901c40db77e82589e69041e94"} Feb 23 13:05:42.015013 master-0 kubenswrapper[7784]: I0223 13:05:42.010783 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:05:42.030280 master-0 kubenswrapper[7784]: I0223 13:05:42.030247 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:05:42.079212 master-0 kubenswrapper[7784]: I0223 13:05:42.078675 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:05:42.098872 master-0 kubenswrapper[7784]: I0223 13:05:42.098130 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:05:42.308545 master-0 kubenswrapper[7784]: I0223 13:05:42.304944 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 23 13:05:42.343700 master-0 kubenswrapper[7784]: I0223 13:05:42.342232 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.342213047 podStartE2EDuration="2.342213047s" podCreationTimestamp="2026-02-23 13:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:42.023067818 +0000 UTC m=+284.757921461" watchObservedRunningTime="2026-02-23 13:05:42.342213047 +0000 UTC m=+285.077066690" Feb 23 13:05:42.345811 master-0 kubenswrapper[7784]: I0223 13:05:42.345397 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv"] Feb 23 13:05:42.387720 master-0 kubenswrapper[7784]: I0223 13:05:42.387595 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6"] Feb 23 13:05:42.387929 master-0 
kubenswrapper[7784]: W0223 13:05:42.387719 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90a694bb_fe3e_4478_bbb4_d2be9cd4c57f.slice/crio-b2ea613937f7a76a28d6f697b6791b043bc923f7e0e6135b9a4e3874c4b94ea7 WatchSource:0}: Error finding container b2ea613937f7a76a28d6f697b6791b043bc923f7e0e6135b9a4e3874c4b94ea7: Status 404 returned error can't find the container with id b2ea613937f7a76a28d6f697b6791b043bc923f7e0e6135b9a4e3874c4b94ea7 Feb 23 13:05:42.486669 master-0 kubenswrapper[7784]: I0223 13:05:42.486592 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th"] Feb 23 13:05:42.499772 master-0 kubenswrapper[7784]: I0223 13:05:42.499724 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh"] Feb 23 13:05:42.500389 master-0 kubenswrapper[7784]: W0223 13:05:42.500288 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda663ecaf_ced2_4c7d_91c8_44e94851f7d6.slice/crio-a6ba7ab38272b1980170e10ad4e2da3b1c3208a4fc9cba6639e6e32d852d5560 WatchSource:0}: Error finding container a6ba7ab38272b1980170e10ad4e2da3b1c3208a4fc9cba6639e6e32d852d5560: Status 404 returned error can't find the container with id a6ba7ab38272b1980170e10ad4e2da3b1c3208a4fc9cba6639e6e32d852d5560 Feb 23 13:05:42.505800 master-0 kubenswrapper[7784]: W0223 13:05:42.505761 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffc63b7_db45_429d_82ff_e50f6aae51dc.slice/crio-190e3738a8b34a408e3ce9a92a26c53265addc4888d71fa248a2acde65380192 WatchSource:0}: Error finding container 190e3738a8b34a408e3ce9a92a26c53265addc4888d71fa248a2acde65380192: Status 404 returned error can't find the container with id 
190e3738a8b34a408e3ce9a92a26c53265addc4888d71fa248a2acde65380192
Feb 23 13:05:42.526577 master-0 kubenswrapper[7784]: I0223 13:05:42.526313 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 23 13:05:42.534088 master-0 kubenswrapper[7784]: I0223 13:05:42.534060 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7"
Feb 23 13:05:42.613303 master-0 kubenswrapper[7784]: I0223 13:05:42.611661 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-sswng"]
Feb 23 13:05:42.613303 master-0 kubenswrapper[7784]: I0223 13:05:42.612429 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn"]
Feb 23 13:05:42.614311 master-0 kubenswrapper[7784]: I0223 13:05:42.614270 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 23 13:05:42.614919 master-0 kubenswrapper[7784]: I0223 13:05:42.614882 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg"]
Feb 23 13:05:42.616387 master-0 kubenswrapper[7784]: W0223 13:05:42.616272 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf57b864_25d7_4420_9052_04dd580a9f7d.slice/crio-6193af77058187e06675d2dcef9a4d240856c04b59dbfbf3639238a188d008e4 WatchSource:0}: Error finding container 6193af77058187e06675d2dcef9a4d240856c04b59dbfbf3639238a188d008e4: Status 404 returned error can't find the container with id 6193af77058187e06675d2dcef9a4d240856c04b59dbfbf3639238a188d008e4
Feb 23 13:05:42.617289 master-0 kubenswrapper[7784]: I0223 13:05:42.617248 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-nm845"]
Feb 23 13:05:42.617846 master-0 kubenswrapper[7784]: W0223 13:05:42.617813 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ede583b_44b0_42af_92c9_f7b8938f7843.slice/crio-541d291f058dd5b70e07bddf8e5c1d943bc102cd629ce0c0ca5aee055819cc00 WatchSource:0}: Error finding container 541d291f058dd5b70e07bddf8e5c1d943bc102cd629ce0c0ca5aee055819cc00: Status 404 returned error can't find the container with id 541d291f058dd5b70e07bddf8e5c1d943bc102cd629ce0c0ca5aee055819cc00
Feb 23 13:05:42.619078 master-0 kubenswrapper[7784]: W0223 13:05:42.619031 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce55de54_8441_4a16_8b57_598042869000.slice/crio-78a724cb7e61bebf9049146706c4cceb43b4093f3bcdd9805faeca5b8c0a66f6 WatchSource:0}: Error finding container 78a724cb7e61bebf9049146706c4cceb43b4093f3bcdd9805faeca5b8c0a66f6: Status 404 returned error can't find the container with id 78a724cb7e61bebf9049146706c4cceb43b4093f3bcdd9805faeca5b8c0a66f6
Feb 23 13:05:42.621294 master-0 kubenswrapper[7784]: W0223 13:05:42.621235 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47dedc5d_1288_4020_b481_5dca68a7d437.slice/crio-e7536105ed8ca3b11d6f435a7c98206e700714a5620adc113fdcbd58553c7a29 WatchSource:0}: Error finding container e7536105ed8ca3b11d6f435a7c98206e700714a5620adc113fdcbd58553c7a29: Status 404 returned error can't find the container with id e7536105ed8ca3b11d6f435a7c98206e700714a5620adc113fdcbd58553c7a29
Feb 23 13:05:42.632036 master-0 kubenswrapper[7784]: I0223 13:05:42.628197 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7"
Feb 23 13:05:42.648614 master-0 kubenswrapper[7784]: E0223 13:05:42.647170 7784 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:05:42.648614 master-0 kubenswrapper[7784]: E0223 13:05:42.647217 7784 projected.go:194] Error preparing data for projected volume kube-api-access-wbk8g for pod openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:05:42.648614 master-0 kubenswrapper[7784]: E0223 13:05:42.647284 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g podName:945907dd-f6b3-400f-b539-e1310eb11dd7 nodeName:}" failed. No retries permitted until 2026-02-23 13:05:43.147261822 +0000 UTC m=+285.882115465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wbk8g" (UniqueName: "kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g") pod "cloud-credential-operator-6968c58f46-87hx7" (UID: "945907dd-f6b3-400f-b539-e1310eb11dd7") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:05:42.853715 master-0 kubenswrapper[7784]: I0223 13:05:42.853656 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"]
Feb 23 13:05:42.856093 master-0 kubenswrapper[7784]: I0223 13:05:42.855818 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"]
Feb 23 13:05:42.882136 master-0 kubenswrapper[7784]: I0223 13:05:42.881953 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-qqvc9"
Feb 23 13:05:42.946402 master-0 kubenswrapper[7784]: I0223 13:05:42.946352 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 23 13:05:43.005440 master-0 kubenswrapper[7784]: I0223 13:05:43.005381 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" event={"ID":"affc63b7-db45-429d-82ff-e50f6aae51dc","Type":"ContainerStarted","Data":"190e3738a8b34a408e3ce9a92a26c53265addc4888d71fa248a2acde65380192"}
Feb 23 13:05:43.006756 master-0 kubenswrapper[7784]: I0223 13:05:43.006723 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" event={"ID":"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6","Type":"ContainerStarted","Data":"6dc6e354bc34576e2c51f7938beb3db18ad7bf25caa74761e84175165545f5f8"}
Feb 23 13:05:43.008707 master-0 kubenswrapper[7784]: I0223 13:05:43.008614 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" event={"ID":"a663ecaf-ced2-4c7d-91c8-44e94851f7d6","Type":"ContainerStarted","Data":"076ad53cf94a922f9cbef8abf7fd0c513533139eb8430c987cee3e2a4eacf6bb"}
Feb 23 13:05:43.008707 master-0 kubenswrapper[7784]: I0223 13:05:43.008639 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" event={"ID":"a663ecaf-ced2-4c7d-91c8-44e94851f7d6","Type":"ContainerStarted","Data":"26a1186ff59907fd2f96cc97b54c6ac88a7c2c4d965d9c749c34381a74f361a9"}
Feb 23 13:05:43.008707 master-0 kubenswrapper[7784]: I0223 13:05:43.008649 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" event={"ID":"a663ecaf-ced2-4c7d-91c8-44e94851f7d6","Type":"ContainerStarted","Data":"a6ba7ab38272b1980170e10ad4e2da3b1c3208a4fc9cba6639e6e32d852d5560"}
Feb 23 13:05:43.015704 master-0 kubenswrapper[7784]: I0223 13:05:43.015609 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" event={"ID":"898e6c96-73d5-4dc5-a383-986599a5bcd9","Type":"ContainerStarted","Data":"3a2d420757c83bca1045a3ec4516092ddbbd8abbf4f20e54b4c522c5d6328b82"}
Feb 23 13:05:43.018549 master-0 kubenswrapper[7784]: I0223 13:05:43.018476 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" event={"ID":"47dedc5d-1288-4020-b481-5dca68a7d437","Type":"ContainerStarted","Data":"8a8b29460278beaf5945bae2f91a902eccc7463d9b39e61ee345d261da3333f5"}
Feb 23 13:05:43.018549 master-0 kubenswrapper[7784]: I0223 13:05:43.018507 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" event={"ID":"47dedc5d-1288-4020-b481-5dca68a7d437","Type":"ContainerStarted","Data":"e7536105ed8ca3b11d6f435a7c98206e700714a5620adc113fdcbd58553c7a29"}
Feb 23 13:05:43.020005 master-0 kubenswrapper[7784]: I0223 13:05:43.019967 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" event={"ID":"bf57b864-25d7-4420-9052-04dd580a9f7d","Type":"ContainerStarted","Data":"2ad0590ec22345937c5fa00808d2a88de234131f5824509043fa060e48c4455b"}
Feb 23 13:05:43.020005 master-0 kubenswrapper[7784]: I0223 13:05:43.019996 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" event={"ID":"bf57b864-25d7-4420-9052-04dd580a9f7d","Type":"ContainerStarted","Data":"6193af77058187e06675d2dcef9a4d240856c04b59dbfbf3639238a188d008e4"}
Feb 23 13:05:43.022833 master-0 kubenswrapper[7784]: I0223 13:05:43.021382 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" event={"ID":"7cadeb05-9298-4bcf-b6f2-659c68eba020","Type":"ContainerStarted","Data":"e8887c7d6eee650b037c513d33ece3c0abae0325c7cfbd8aa521e15955d8540b"}
Feb 23 13:05:43.022833 master-0 kubenswrapper[7784]: I0223 13:05:43.022604 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-sswng" event={"ID":"ce55de54-8441-4a16-8b57-598042869000","Type":"ContainerStarted","Data":"78a724cb7e61bebf9049146706c4cceb43b4093f3bcdd9805faeca5b8c0a66f6"}
Feb 23 13:05:43.023887 master-0 kubenswrapper[7784]: I0223 13:05:43.023832 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" event={"ID":"5ede583b-44b0-42af-92c9-f7b8938f7843","Type":"ContainerStarted","Data":"541d291f058dd5b70e07bddf8e5c1d943bc102cd629ce0c0ca5aee055819cc00"}
Feb 23 13:05:43.025220 master-0 kubenswrapper[7784]: I0223 13:05:43.025176 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerStarted","Data":"b2ea613937f7a76a28d6f697b6791b043bc923f7e0e6135b9a4e3874c4b94ea7"}
Feb 23 13:05:43.031616 master-0 kubenswrapper[7784]: I0223 13:05:43.031570 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" event={"ID":"21541c8c-3d6b-4af1-a03a-d899cebb9c26","Type":"ContainerStarted","Data":"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af"}
Feb 23 13:05:43.032307 master-0 kubenswrapper[7784]: I0223 13:05:43.032249 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" podStartSLOduration=2.03223372 podStartE2EDuration="2.03223372s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:43.03019054 +0000 UTC m=+285.765044183" watchObservedRunningTime="2026-02-23 13:05:43.03223372 +0000 UTC m=+285.767087363"
Feb 23 13:05:43.177478 master-0 kubenswrapper[7784]: I0223 13:05:43.177319 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbk8g\" (UniqueName: \"kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7"
Feb 23 13:05:43.201885 master-0 kubenswrapper[7784]: I0223 13:05:43.201829 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbk8g\" (UniqueName: \"kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7"
Feb 23 13:05:43.245577 master-0 kubenswrapper[7784]: I0223 13:05:43.245515 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7"
Feb 23 13:05:43.718908 master-0 kubenswrapper[7784]: I0223 13:05:43.718850 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7"]
Feb 23 13:05:44.744889 master-0 kubenswrapper[7784]: W0223 13:05:44.744831 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod945907dd_f6b3_400f_b539_e1310eb11dd7.slice/crio-28b57766d3c93686f32849ca9f209837a911e1a78e068a18fdf8af950eee54e7 WatchSource:0}: Error finding container 28b57766d3c93686f32849ca9f209837a911e1a78e068a18fdf8af950eee54e7: Status 404 returned error can't find the container with id 28b57766d3c93686f32849ca9f209837a911e1a78e068a18fdf8af950eee54e7
Feb 23 13:05:45.043877 master-0 kubenswrapper[7784]: I0223 13:05:45.043825 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" event={"ID":"945907dd-f6b3-400f-b539-e1310eb11dd7","Type":"ContainerStarted","Data":"28b57766d3c93686f32849ca9f209837a911e1a78e068a18fdf8af950eee54e7"}
Feb 23 13:05:46.141983 master-0 kubenswrapper[7784]: I0223 13:05:46.141918 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q8bjq"]
Feb 23 13:05:46.144065 master-0 kubenswrapper[7784]: I0223 13:05:46.144046 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.150012 master-0 kubenswrapper[7784]: I0223 13:05:46.149432 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 23 13:05:46.150315 master-0 kubenswrapper[7784]: I0223 13:05:46.150299 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-vqpkz"
Feb 23 13:05:46.227187 master-0 kubenswrapper[7784]: I0223 13:05:46.226929 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57803492-e1dd-4994-8330-1e9b393d54fd-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.227187 master-0 kubenswrapper[7784]: I0223 13:05:46.226989 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2gm\" (UniqueName: \"kubernetes.io/projected/57803492-e1dd-4994-8330-1e9b393d54fd-kube-api-access-vg2gm\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.227187 master-0 kubenswrapper[7784]: I0223 13:05:46.227030 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.227571 master-0 kubenswrapper[7784]: I0223 13:05:46.227487 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/57803492-e1dd-4994-8330-1e9b393d54fd-rootfs\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.329086 master-0 kubenswrapper[7784]: I0223 13:05:46.329003 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/57803492-e1dd-4994-8330-1e9b393d54fd-rootfs\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.329086 master-0 kubenswrapper[7784]: I0223 13:05:46.329072 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57803492-e1dd-4994-8330-1e9b393d54fd-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.329086 master-0 kubenswrapper[7784]: I0223 13:05:46.329093 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2gm\" (UniqueName: \"kubernetes.io/projected/57803492-e1dd-4994-8330-1e9b393d54fd-kube-api-access-vg2gm\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.329576 master-0 kubenswrapper[7784]: I0223 13:05:46.329154 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.331268 master-0 kubenswrapper[7784]: I0223 13:05:46.329886 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/57803492-e1dd-4994-8330-1e9b393d54fd-rootfs\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.331268 master-0 kubenswrapper[7784]: I0223 13:05:46.331193 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57803492-e1dd-4994-8330-1e9b393d54fd-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.333422 master-0 kubenswrapper[7784]: I0223 13:05:46.333376 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.362821 master-0 kubenswrapper[7784]: I0223 13:05:46.362745 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2gm\" (UniqueName: \"kubernetes.io/projected/57803492-e1dd-4994-8330-1e9b393d54fd-kube-api-access-vg2gm\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:46.466628 master-0 kubenswrapper[7784]: I0223 13:05:46.466544 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:05:49.445333 master-0 kubenswrapper[7784]: I0223 13:05:49.445124 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr"]
Feb 23 13:05:53.730303 master-0 kubenswrapper[7784]: W0223 13:05:53.730167 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57803492_e1dd_4994_8330_1e9b393d54fd.slice/crio-7cc4f51accb13c4562d715ad8c6bcb57f3016abaa8450769a7e898e5187b65a3 WatchSource:0}: Error finding container 7cc4f51accb13c4562d715ad8c6bcb57f3016abaa8450769a7e898e5187b65a3: Status 404 returned error can't find the container with id 7cc4f51accb13c4562d715ad8c6bcb57f3016abaa8450769a7e898e5187b65a3
Feb 23 13:05:54.110878 master-0 kubenswrapper[7784]: I0223 13:05:54.110013 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" event={"ID":"945907dd-f6b3-400f-b539-e1310eb11dd7","Type":"ContainerStarted","Data":"646ec694e0d4bef89c3c4df0024ffd40c6e0d4986e4e55f49aa6e070583bed90"}
Feb 23 13:05:54.148328 master-0 kubenswrapper[7784]: I0223 13:05:54.136115 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" event={"ID":"57803492-e1dd-4994-8330-1e9b393d54fd","Type":"ContainerStarted","Data":"7cc4f51accb13c4562d715ad8c6bcb57f3016abaa8450769a7e898e5187b65a3"}
Feb 23 13:05:54.156014 master-0 kubenswrapper[7784]: I0223 13:05:54.155870 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" podStartSLOduration=1.9505812040000001 podStartE2EDuration="13.155847239s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.512264893 +0000 UTC m=+285.247118536" lastFinishedPulling="2026-02-23 13:05:53.717530908 +0000 UTC m=+296.452384571" observedRunningTime="2026-02-23 13:05:54.153584005 +0000 UTC m=+296.888437658" watchObservedRunningTime="2026-02-23 13:05:54.155847239 +0000 UTC m=+296.890700882"
Feb 23 13:05:54.183717 master-0 kubenswrapper[7784]: I0223 13:05:54.183629 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" podStartSLOduration=2.14028154 podStartE2EDuration="13.183603348s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.760082449 +0000 UTC m=+285.494936102" lastFinishedPulling="2026-02-23 13:05:53.803404177 +0000 UTC m=+296.538257910" observedRunningTime="2026-02-23 13:05:54.177796266 +0000 UTC m=+296.912649929" watchObservedRunningTime="2026-02-23 13:05:54.183603348 +0000 UTC m=+296.918456991"
Feb 23 13:05:55.151951 master-0 kubenswrapper[7784]: I0223 13:05:55.151172 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" event={"ID":"5ede583b-44b0-42af-92c9-f7b8938f7843","Type":"ContainerStarted","Data":"9b6793307745f6a85fc70df6b4de715b7748d6182b66009e926d2209513a5af3"}
Feb 23 13:05:55.151951 master-0 kubenswrapper[7784]: I0223 13:05:55.151222 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" event={"ID":"5ede583b-44b0-42af-92c9-f7b8938f7843","Type":"ContainerStarted","Data":"953dd17d0a5fcabde321ee0a802f7ab31f644d0ccba46f73569551646e2c452d"}
Feb 23 13:05:55.154443 master-0 kubenswrapper[7784]: I0223 13:05:55.154103 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" event={"ID":"898e6c96-73d5-4dc5-a383-986599a5bcd9","Type":"ContainerStarted","Data":"f1541d3cbb640b21177c45a46a1eddd016753fb35a3f044e6355c1e36d49d0bd"}
Feb 23 13:05:55.154443 master-0 kubenswrapper[7784]: I0223 13:05:55.154413 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"
Feb 23 13:05:55.162131 master-0 kubenswrapper[7784]: I0223 13:05:55.162064 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"
Feb 23 13:05:55.166776 master-0 kubenswrapper[7784]: I0223 13:05:55.166716 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" event={"ID":"21541c8c-3d6b-4af1-a03a-d899cebb9c26","Type":"ContainerStarted","Data":"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034"}
Feb 23 13:05:55.166956 master-0 kubenswrapper[7784]: I0223 13:05:55.166907 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerName="kube-rbac-proxy" containerID="cri-o://5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af" gracePeriod=30
Feb 23 13:05:55.167330 master-0 kubenswrapper[7784]: I0223 13:05:55.167290 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerName="machine-approver-controller" containerID="cri-o://3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034" gracePeriod=30
Feb 23 13:05:55.176695 master-0 kubenswrapper[7784]: I0223 13:05:55.171862 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" event={"ID":"47dedc5d-1288-4020-b481-5dca68a7d437","Type":"ContainerStarted","Data":"03c434f6de970d6fadea568234ec0af471fa3dec238b0bd5f6a6179ccb8e7df1"}
Feb 23 13:05:55.186738 master-0 kubenswrapper[7784]: I0223 13:05:55.182513 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" event={"ID":"bf57b864-25d7-4420-9052-04dd580a9f7d","Type":"ContainerStarted","Data":"d0f028f5c9ba3cbdb9aa71d077d68cd25f9f1bd1f015e402871ed79b04b1c8f3"}
Feb 23 13:05:55.187204 master-0 kubenswrapper[7784]: I0223 13:05:55.187141 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" podStartSLOduration=3.090498151 podStartE2EDuration="14.187122002s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.622857875 +0000 UTC m=+285.357711518" lastFinishedPulling="2026-02-23 13:05:53.719481696 +0000 UTC m=+296.454335369" observedRunningTime="2026-02-23 13:05:55.1767798 +0000 UTC m=+297.911633453" watchObservedRunningTime="2026-02-23 13:05:55.187122002 +0000 UTC m=+297.921975645"
Feb 23 13:05:55.187623 master-0 kubenswrapper[7784]: I0223 13:05:55.187551 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" event={"ID":"affc63b7-db45-429d-82ff-e50f6aae51dc","Type":"ContainerStarted","Data":"41202e9f2790a7f6235a0ce9eb87baca7cb432343b22dcbd777e862cc1562fd9"}
Feb 23 13:05:55.195469 master-0 kubenswrapper[7784]: I0223 13:05:55.195406 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" event={"ID":"bd537759-1528-465f-a3bb-e56fbf4cee74","Type":"ContainerStarted","Data":"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89"}
Feb 23 13:05:55.195588 master-0 kubenswrapper[7784]: I0223 13:05:55.195477 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" event={"ID":"bd537759-1528-465f-a3bb-e56fbf4cee74","Type":"ContainerStarted","Data":"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c"}
Feb 23 13:05:55.195588 master-0 kubenswrapper[7784]: I0223 13:05:55.195492 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" event={"ID":"bd537759-1528-465f-a3bb-e56fbf4cee74","Type":"ContainerStarted","Data":"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13"}
Feb 23 13:05:55.198040 master-0 kubenswrapper[7784]: I0223 13:05:55.197866 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-sswng" event={"ID":"ce55de54-8441-4a16-8b57-598042869000","Type":"ContainerStarted","Data":"1ef0487ccb5ab3ee737e4b94ff7cde5f45bc4baa3661deb6510b1e0297a7fc4b"}
Feb 23 13:05:55.202761 master-0 kubenswrapper[7784]: I0223 13:05:55.202675 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" event={"ID":"7cadeb05-9298-4bcf-b6f2-659c68eba020","Type":"ContainerStarted","Data":"fd7bef0188f460933548878f448786d8b5f32190bd0c96a289a6a492efcec952"}
Feb 23 13:05:55.203000 master-0 kubenswrapper[7784]: I0223 13:05:55.202963 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"
Feb 23 13:05:55.207907 master-0 kubenswrapper[7784]: I0223 13:05:55.207584 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" podStartSLOduration=2.771650281 podStartE2EDuration="14.207553982s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.865485965 +0000 UTC m=+285.600339608" lastFinishedPulling="2026-02-23 13:05:54.301389666 +0000 UTC m=+297.036243309" observedRunningTime="2026-02-23 13:05:55.205676706 +0000 UTC m=+297.940530359" watchObservedRunningTime="2026-02-23 13:05:55.207553982 +0000 UTC m=+297.942407625"
Feb 23 13:05:55.209938 master-0 kubenswrapper[7784]: I0223 13:05:55.209904 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"
Feb 23 13:05:55.212657 master-0 kubenswrapper[7784]: I0223 13:05:55.212258 7784 generic.go:334] "Generic (PLEG): container finished" podID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerID="9e7f44d5060fdf1a6451fc7abe4dd4b1ac2744ce0d994162e1ca6e694e18353f" exitCode=0
Feb 23 13:05:55.212657 master-0 kubenswrapper[7784]: I0223 13:05:55.212329 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerDied","Data":"9e7f44d5060fdf1a6451fc7abe4dd4b1ac2744ce0d994162e1ca6e694e18353f"}
Feb 23 13:05:55.220602 master-0 kubenswrapper[7784]: I0223 13:05:55.220545 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" event={"ID":"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6","Type":"ContainerStarted","Data":"7d1ab7b3ec6bc4e5165a8cbcb62bfdeed6825e992cbcfd99f88821b4c036afa2"}
Feb 23 13:05:55.220960 master-0 kubenswrapper[7784]: I0223 13:05:55.220618 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" event={"ID":"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6","Type":"ContainerStarted","Data":"a2d565dce370ee80cd2305c4b3431c3d4fce5cbfe8b62fc1078de25fc0114504"}
Feb 23 13:05:55.223591 master-0 kubenswrapper[7784]: I0223 13:05:55.223236 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" event={"ID":"57803492-e1dd-4994-8330-1e9b393d54fd","Type":"ContainerStarted","Data":"dee213f15416abb9ebf800c43fce607fa7ba3b3cfee07ca0fa563630c117e685"}
Feb 23 13:05:55.223591 master-0 kubenswrapper[7784]: I0223 13:05:55.223370 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" event={"ID":"57803492-e1dd-4994-8330-1e9b393d54fd","Type":"ContainerStarted","Data":"72a51e109fca9a61ceacafdcb5f6516974b4d6ec5e2db10862a61f64b8e25010"}
Feb 23 13:05:55.238614 master-0 kubenswrapper[7784]: I0223 13:05:55.238519 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" podStartSLOduration=2.597701429 podStartE2EDuration="14.238496248s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.078270166 +0000 UTC m=+284.813123819" lastFinishedPulling="2026-02-23 13:05:53.719064985 +0000 UTC m=+296.453918638" observedRunningTime="2026-02-23 13:05:55.236493059 +0000 UTC m=+297.971346712" watchObservedRunningTime="2026-02-23 13:05:55.238496248 +0000 UTC m=+297.973349891"
Feb 23 13:05:55.258819 master-0 kubenswrapper[7784]: I0223 13:05:55.258727 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" podStartSLOduration=6.0615824 podStartE2EDuration="14.258708182s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.429572262 +0000 UTC m=+285.164425905" lastFinishedPulling="2026-02-23 13:05:50.626698024 +0000 UTC m=+293.361551687" observedRunningTime="2026-02-23 13:05:55.256397705 +0000 UTC m=+297.991251348" watchObservedRunningTime="2026-02-23 13:05:55.258708182 +0000 UTC m=+297.993561815"
Feb 23 13:05:55.285922 master-0 kubenswrapper[7784]: I0223 13:05:55.285809 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" podStartSLOduration=3.351955281 podStartE2EDuration="14.285785263s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.78467099 +0000 UTC m=+285.519524633" lastFinishedPulling="2026-02-23 13:05:53.718500952 +0000 UTC m=+296.453354615" observedRunningTime="2026-02-23 13:05:55.280448973 +0000 UTC m=+298.015302626" watchObservedRunningTime="2026-02-23 13:05:55.285785263 +0000 UTC m=+298.020638906"
Feb 23 13:05:55.301296 master-0 kubenswrapper[7784]: I0223 13:05:55.301171 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" podStartSLOduration=2.934683004 podStartE2EDuration="14.301149108s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.865504216 +0000 UTC m=+285.600357859" lastFinishedPulling="2026-02-23 13:05:54.23197031 +0000 UTC m=+296.966823963" observedRunningTime="2026-02-23 13:05:55.297272734 +0000 UTC m=+298.032126367" watchObservedRunningTime="2026-02-23 13:05:55.301149108 +0000 UTC m=+298.036002751"
Feb 23 13:05:55.327268 master-0 kubenswrapper[7784]: I0223 13:05:55.327208 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr"
Feb 23 13:05:55.352102 master-0 kubenswrapper[7784]: I0223 13:05:55.352023 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-59b498fcfb-sswng" podStartSLOduration=3.256498299 podStartE2EDuration="14.352005302s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.624239109 +0000 UTC m=+285.359092752" lastFinishedPulling="2026-02-23 13:05:53.719746102 +0000 UTC m=+296.454599755" observedRunningTime="2026-02-23 13:05:55.351326915 +0000 UTC m=+298.086180578" watchObservedRunningTime="2026-02-23 13:05:55.352005302 +0000 UTC m=+298.086858945"
Feb 23 13:05:55.380870 master-0 kubenswrapper[7784]: I0223 13:05:55.380327 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" podStartSLOduration=2.618183689 podStartE2EDuration="14.380293003s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:41.954572744 +0000 UTC m=+284.689426407" lastFinishedPulling="2026-02-23 13:05:53.716682068 +0000 UTC m=+296.451535721" observedRunningTime="2026-02-23 13:05:55.373316012 +0000 UTC m=+298.108169665" watchObservedRunningTime="2026-02-23 13:05:55.380293003 +0000 UTC m=+298.115146646"
Feb 23 13:05:55.381757 master-0 kubenswrapper[7784]: I0223 13:05:55.381708 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-config\") pod \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") "
Feb 23 13:05:55.381881 master-0 kubenswrapper[7784]: I0223 13:05:55.381867 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21541c8c-3d6b-4af1-a03a-d899cebb9c26-machine-approver-tls\") pod \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") "
Feb 23 13:05:55.382267 master-0 kubenswrapper[7784]: I0223 13:05:55.382188 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfjkd\" (UniqueName: \"kubernetes.io/projected/21541c8c-3d6b-4af1-a03a-d899cebb9c26-kube-api-access-bfjkd\") pod \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") "
Feb 23 13:05:55.382991 master-0 kubenswrapper[7784]: I0223 13:05:55.382882 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-config" (OuterVolumeSpecName: "config") pod "21541c8c-3d6b-4af1-a03a-d899cebb9c26" (UID: "21541c8c-3d6b-4af1-a03a-d899cebb9c26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:05:55.383681 master-0 kubenswrapper[7784]: I0223 13:05:55.383492 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "21541c8c-3d6b-4af1-a03a-d899cebb9c26" (UID: "21541c8c-3d6b-4af1-a03a-d899cebb9c26"). InnerVolumeSpecName "auth-proxy-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:05:55.383778 master-0 kubenswrapper[7784]: I0223 13:05:55.382444 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-auth-proxy-config\") pod \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\" (UID: \"21541c8c-3d6b-4af1-a03a-d899cebb9c26\") " Feb 23 13:05:55.384216 master-0 kubenswrapper[7784]: I0223 13:05:55.384202 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:55.384330 master-0 kubenswrapper[7784]: I0223 13:05:55.384299 7784 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21541c8c-3d6b-4af1-a03a-d899cebb9c26-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:55.388485 master-0 kubenswrapper[7784]: I0223 13:05:55.385950 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21541c8c-3d6b-4af1-a03a-d899cebb9c26-kube-api-access-bfjkd" (OuterVolumeSpecName: "kube-api-access-bfjkd") pod "21541c8c-3d6b-4af1-a03a-d899cebb9c26" (UID: "21541c8c-3d6b-4af1-a03a-d899cebb9c26"). InnerVolumeSpecName "kube-api-access-bfjkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:05:55.388810 master-0 kubenswrapper[7784]: I0223 13:05:55.388754 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21541c8c-3d6b-4af1-a03a-d899cebb9c26-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "21541c8c-3d6b-4af1-a03a-d899cebb9c26" (UID: "21541c8c-3d6b-4af1-a03a-d899cebb9c26"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:05:55.404276 master-0 kubenswrapper[7784]: I0223 13:05:55.404153 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" podStartSLOduration=9.404122875 podStartE2EDuration="9.404122875s" podCreationTimestamp="2026-02-23 13:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:55.401493911 +0000 UTC m=+298.136347554" watchObservedRunningTime="2026-02-23 13:05:55.404122875 +0000 UTC m=+298.138976508" Feb 23 13:05:55.404543 master-0 kubenswrapper[7784]: I0223 13:05:55.404482 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb"] Feb 23 13:05:55.486498 master-0 kubenswrapper[7784]: I0223 13:05:55.486414 7784 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21541c8c-3d6b-4af1-a03a-d899cebb9c26-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:55.486498 master-0 kubenswrapper[7784]: I0223 13:05:55.486475 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfjkd\" (UniqueName: \"kubernetes.io/projected/21541c8c-3d6b-4af1-a03a-d899cebb9c26-kube-api-access-bfjkd\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:55.828007 master-0 kubenswrapper[7784]: I0223 13:05:55.827931 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vnmk2"] Feb 23 13:05:55.828256 master-0 kubenswrapper[7784]: E0223 13:05:55.828219 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerName="kube-rbac-proxy" Feb 23 13:05:55.828256 master-0 kubenswrapper[7784]: I0223 13:05:55.828240 7784 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerName="kube-rbac-proxy" Feb 23 13:05:55.828427 master-0 kubenswrapper[7784]: E0223 13:05:55.828261 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerName="machine-approver-controller" Feb 23 13:05:55.828427 master-0 kubenswrapper[7784]: I0223 13:05:55.828274 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerName="machine-approver-controller" Feb 23 13:05:55.828515 master-0 kubenswrapper[7784]: I0223 13:05:55.828466 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerName="machine-approver-controller" Feb 23 13:05:55.828515 master-0 kubenswrapper[7784]: I0223 13:05:55.828490 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerName="kube-rbac-proxy" Feb 23 13:05:55.829462 master-0 kubenswrapper[7784]: I0223 13:05:55.829432 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:55.837195 master-0 kubenswrapper[7784]: I0223 13:05:55.834195 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-dndpz" Feb 23 13:05:55.845229 master-0 kubenswrapper[7784]: I0223 13:05:55.845150 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnmk2"] Feb 23 13:05:55.893365 master-0 kubenswrapper[7784]: I0223 13:05:55.893232 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-catalog-content\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:55.893642 master-0 kubenswrapper[7784]: I0223 13:05:55.893572 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfl9v\" (UniqueName: \"kubernetes.io/projected/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-kube-api-access-wfl9v\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:55.893791 master-0 kubenswrapper[7784]: I0223 13:05:55.893757 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-utilities\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:55.995248 master-0 kubenswrapper[7784]: I0223 13:05:55.995172 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-catalog-content\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:55.995511 master-0 kubenswrapper[7784]: I0223 13:05:55.995475 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfl9v\" (UniqueName: \"kubernetes.io/projected/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-kube-api-access-wfl9v\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:55.995618 master-0 kubenswrapper[7784]: I0223 13:05:55.995584 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-utilities\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:55.996090 master-0 kubenswrapper[7784]: I0223 13:05:55.996033 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-catalog-content\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:55.996228 master-0 kubenswrapper[7784]: I0223 13:05:55.996137 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-utilities\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:56.015851 master-0 kubenswrapper[7784]: I0223 13:05:56.015743 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wfl9v\" (UniqueName: \"kubernetes.io/projected/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-kube-api-access-wfl9v\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:56.030806 master-0 kubenswrapper[7784]: I0223 13:05:56.030743 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w7wq9"] Feb 23 13:05:56.032420 master-0 kubenswrapper[7784]: I0223 13:05:56.032377 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.036732 master-0 kubenswrapper[7784]: I0223 13:05:56.036676 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7wq9"] Feb 23 13:05:56.038589 master-0 kubenswrapper[7784]: I0223 13:05:56.037202 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-t7xq8" Feb 23 13:05:56.097775 master-0 kubenswrapper[7784]: I0223 13:05:56.097603 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-utilities\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.097775 master-0 kubenswrapper[7784]: I0223 13:05:56.097699 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-catalog-content\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.097971 master-0 kubenswrapper[7784]: I0223 13:05:56.097788 7784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j744d\" (UniqueName: \"kubernetes.io/projected/3a5284f9-cbb7-400b-ab39-bfef60ec198b-kube-api-access-j744d\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.157061 master-0 kubenswrapper[7784]: I0223 13:05:56.156574 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:05:56.199169 master-0 kubenswrapper[7784]: I0223 13:05:56.199080 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-utilities\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.199169 master-0 kubenswrapper[7784]: I0223 13:05:56.199159 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-catalog-content\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.199447 master-0 kubenswrapper[7784]: I0223 13:05:56.199218 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j744d\" (UniqueName: \"kubernetes.io/projected/3a5284f9-cbb7-400b-ab39-bfef60ec198b-kube-api-access-j744d\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.200611 master-0 kubenswrapper[7784]: I0223 13:05:56.200527 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-catalog-content\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.200767 master-0 kubenswrapper[7784]: I0223 13:05:56.200717 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-utilities\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.219843 master-0 kubenswrapper[7784]: I0223 13:05:56.219744 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j744d\" (UniqueName: \"kubernetes.io/projected/3a5284f9-cbb7-400b-ab39-bfef60ec198b-kube-api-access-j744d\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.254086 master-0 kubenswrapper[7784]: I0223 13:05:56.254041 7784 generic.go:334] "Generic (PLEG): container finished" podID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerID="3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034" exitCode=0 Feb 23 13:05:56.254354 master-0 kubenswrapper[7784]: I0223 13:05:56.254310 7784 generic.go:334] "Generic (PLEG): container finished" podID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" containerID="5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af" exitCode=0 Feb 23 13:05:56.254893 master-0 kubenswrapper[7784]: I0223 13:05:56.254808 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" event={"ID":"21541c8c-3d6b-4af1-a03a-d899cebb9c26","Type":"ContainerDied","Data":"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034"} Feb 23 13:05:56.254953 master-0 kubenswrapper[7784]: I0223 13:05:56.254908 7784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" event={"ID":"21541c8c-3d6b-4af1-a03a-d899cebb9c26","Type":"ContainerDied","Data":"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af"} Feb 23 13:05:56.254953 master-0 kubenswrapper[7784]: I0223 13:05:56.254920 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" event={"ID":"21541c8c-3d6b-4af1-a03a-d899cebb9c26","Type":"ContainerDied","Data":"884e5b0dfd1e1d79bfbe276b1a0f2dac6f4b84154b57ab496bf6e8d3c5c08e2b"} Feb 23 13:05:56.254953 master-0 kubenswrapper[7784]: I0223 13:05:56.254944 7784 scope.go:117] "RemoveContainer" containerID="3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034" Feb 23 13:05:56.255147 master-0 kubenswrapper[7784]: I0223 13:05:56.255124 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr" Feb 23 13:05:56.280510 master-0 kubenswrapper[7784]: I0223 13:05:56.280465 7784 scope.go:117] "RemoveContainer" containerID="5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af" Feb 23 13:05:56.293828 master-0 kubenswrapper[7784]: I0223 13:05:56.292594 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr"] Feb 23 13:05:56.294477 master-0 kubenswrapper[7784]: I0223 13:05:56.294413 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-fhsgr"] Feb 23 13:05:56.305652 master-0 kubenswrapper[7784]: I0223 13:05:56.305608 7784 scope.go:117] "RemoveContainer" containerID="3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034" Feb 23 13:05:56.306050 master-0 kubenswrapper[7784]: E0223 13:05:56.305997 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034\": container with ID starting with 3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034 not found: ID does not exist" containerID="3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034" Feb 23 13:05:56.306097 master-0 kubenswrapper[7784]: I0223 13:05:56.306056 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034"} err="failed to get container status \"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034\": rpc error: code = NotFound desc = could not find container \"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034\": container with ID starting with 3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034 not found: ID does not exist" Feb 23 13:05:56.306097 master-0 kubenswrapper[7784]: I0223 13:05:56.306086 7784 scope.go:117] "RemoveContainer" containerID="5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af" Feb 23 13:05:56.306421 master-0 kubenswrapper[7784]: E0223 13:05:56.306389 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af\": container with ID starting with 5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af not found: ID does not exist" containerID="5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af" Feb 23 13:05:56.306467 master-0 kubenswrapper[7784]: I0223 13:05:56.306431 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af"} err="failed to get container status \"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af\": rpc error: code = NotFound desc = could not find 
container \"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af\": container with ID starting with 5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af not found: ID does not exist" Feb 23 13:05:56.306467 master-0 kubenswrapper[7784]: I0223 13:05:56.306461 7784 scope.go:117] "RemoveContainer" containerID="3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034" Feb 23 13:05:56.306750 master-0 kubenswrapper[7784]: I0223 13:05:56.306724 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034"} err="failed to get container status \"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034\": rpc error: code = NotFound desc = could not find container \"3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034\": container with ID starting with 3749de81befe34919140911cbd1c5a6fc7c613a37706e3a68ab55212feb5c034 not found: ID does not exist" Feb 23 13:05:56.306750 master-0 kubenswrapper[7784]: I0223 13:05:56.306746 7784 scope.go:117] "RemoveContainer" containerID="5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af" Feb 23 13:05:56.307373 master-0 kubenswrapper[7784]: I0223 13:05:56.307257 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af"} err="failed to get container status \"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af\": rpc error: code = NotFound desc = could not find container \"5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af\": container with ID starting with 5539ea6250345a51bb79d526ee828b569f53235d4ec01612e2b092e88f70e4af not found: ID does not exist" Feb 23 13:05:56.324804 master-0 kubenswrapper[7784]: I0223 13:05:56.324746 7784 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz"] Feb 23 13:05:56.325662 master-0 kubenswrapper[7784]: I0223 13:05:56.325639 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.327772 master-0 kubenswrapper[7784]: I0223 13:05:56.327517 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nhhd2" Feb 23 13:05:56.327772 master-0 kubenswrapper[7784]: I0223 13:05:56.327572 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 13:05:56.329671 master-0 kubenswrapper[7784]: I0223 13:05:56.327782 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 13:05:56.329671 master-0 kubenswrapper[7784]: I0223 13:05:56.328528 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 13:05:56.331933 master-0 kubenswrapper[7784]: I0223 13:05:56.331528 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 13:05:56.331933 master-0 kubenswrapper[7784]: I0223 13:05:56.331651 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 13:05:56.370414 master-0 kubenswrapper[7784]: I0223 13:05:56.368736 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:05:56.402063 master-0 kubenswrapper[7784]: I0223 13:05:56.402012 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjkkc\" (UniqueName: \"kubernetes.io/projected/0d134032-1c35-4b69-9336-bcdc9c1cb87d-kube-api-access-wjkkc\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.402245 master-0 kubenswrapper[7784]: I0223 13:05:56.402159 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.402245 master-0 kubenswrapper[7784]: I0223 13:05:56.402240 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.402393 master-0 kubenswrapper[7784]: I0223 13:05:56.402367 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.504287 master-0 kubenswrapper[7784]: I0223 13:05:56.504231 7784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjkkc\" (UniqueName: \"kubernetes.io/projected/0d134032-1c35-4b69-9336-bcdc9c1cb87d-kube-api-access-wjkkc\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.504650 master-0 kubenswrapper[7784]: I0223 13:05:56.504366 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.504650 master-0 kubenswrapper[7784]: I0223 13:05:56.504529 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.504650 master-0 kubenswrapper[7784]: I0223 13:05:56.504575 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.505291 master-0 kubenswrapper[7784]: I0223 13:05:56.505260 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: 
\"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.505527 master-0 kubenswrapper[7784]: I0223 13:05:56.505499 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.512862 master-0 kubenswrapper[7784]: I0223 13:05:56.512800 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.525522 master-0 kubenswrapper[7784]: I0223 13:05:56.525486 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjkkc\" (UniqueName: \"kubernetes.io/projected/0d134032-1c35-4b69-9336-bcdc9c1cb87d-kube-api-access-wjkkc\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.645983 master-0 kubenswrapper[7784]: I0223 13:05:56.645941 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:05:56.647478 master-0 kubenswrapper[7784]: I0223 13:05:56.647375 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnmk2"] Feb 23 13:05:56.667397 master-0 kubenswrapper[7784]: W0223 13:05:56.667320 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d134032_1c35_4b69_9336_bcdc9c1cb87d.slice/crio-fcc6cec1e5cb2ad6735082c479ccfca43dd610036ae64420869156c1921dfe15 WatchSource:0}: Error finding container fcc6cec1e5cb2ad6735082c479ccfca43dd610036ae64420869156c1921dfe15: Status 404 returned error can't find the container with id fcc6cec1e5cb2ad6735082c479ccfca43dd610036ae64420869156c1921dfe15 Feb 23 13:05:56.782911 master-0 kubenswrapper[7784]: I0223 13:05:56.782839 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7wq9"] Feb 23 13:05:57.237415 master-0 kubenswrapper[7784]: W0223 13:05:57.237362 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a5284f9_cbb7_400b_ab39_bfef60ec198b.slice/crio-652aff135f7d81e9986b86e7980c1074aab42baa8a4fc667f78c2a4b153be766 WatchSource:0}: Error finding container 652aff135f7d81e9986b86e7980c1074aab42baa8a4fc667f78c2a4b153be766: Status 404 returned error can't find the container with id 652aff135f7d81e9986b86e7980c1074aab42baa8a4fc667f78c2a4b153be766 Feb 23 13:05:57.261877 master-0 kubenswrapper[7784]: I0223 13:05:57.261798 7784 generic.go:334] "Generic (PLEG): container finished" podID="8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3" containerID="2599e93ebbbf10e3a2918075f4c5d9d7aa6ac90db44d1f03155a36a2b83d2e96" exitCode=0 Feb 23 13:05:57.262181 master-0 kubenswrapper[7784]: I0223 13:05:57.261892 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vnmk2" event={"ID":"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3","Type":"ContainerDied","Data":"2599e93ebbbf10e3a2918075f4c5d9d7aa6ac90db44d1f03155a36a2b83d2e96"} Feb 23 13:05:57.262181 master-0 kubenswrapper[7784]: I0223 13:05:57.261958 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnmk2" event={"ID":"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3","Type":"ContainerStarted","Data":"f5a1d137318bed9fd566b8260909991bd75bf6152bc142fe74433e2215565edb"} Feb 23 13:05:57.264011 master-0 kubenswrapper[7784]: I0223 13:05:57.263954 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7wq9" event={"ID":"3a5284f9-cbb7-400b-ab39-bfef60ec198b","Type":"ContainerStarted","Data":"652aff135f7d81e9986b86e7980c1074aab42baa8a4fc667f78c2a4b153be766"} Feb 23 13:05:57.265787 master-0 kubenswrapper[7784]: I0223 13:05:57.265734 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" event={"ID":"0d134032-1c35-4b69-9336-bcdc9c1cb87d","Type":"ContainerStarted","Data":"00e144732371c125ee2a4adf569af5e32495ee50b06e6c1dad771ab4601c0043"} Feb 23 13:05:57.265787 master-0 kubenswrapper[7784]: I0223 13:05:57.265777 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" event={"ID":"0d134032-1c35-4b69-9336-bcdc9c1cb87d","Type":"ContainerStarted","Data":"fcc6cec1e5cb2ad6735082c479ccfca43dd610036ae64420869156c1921dfe15"} Feb 23 13:05:57.268035 master-0 kubenswrapper[7784]: I0223 13:05:57.267959 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="cluster-cloud-controller-manager" 
containerID="cri-o://87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13" gracePeriod=30 Feb 23 13:05:57.268313 master-0 kubenswrapper[7784]: I0223 13:05:57.268246 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="kube-rbac-proxy" containerID="cri-o://1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89" gracePeriod=30 Feb 23 13:05:57.268425 master-0 kubenswrapper[7784]: I0223 13:05:57.268333 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="config-sync-controllers" containerID="cri-o://afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c" gracePeriod=30 Feb 23 13:05:57.419250 master-0 kubenswrapper[7784]: I0223 13:05:57.418794 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwhpv"] Feb 23 13:05:57.420001 master-0 kubenswrapper[7784]: I0223 13:05:57.419898 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.421424 master-0 kubenswrapper[7784]: I0223 13:05:57.421393 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-gm6kx" Feb 23 13:05:57.433975 master-0 kubenswrapper[7784]: I0223 13:05:57.433440 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwhpv"] Feb 23 13:05:57.520079 master-0 kubenswrapper[7784]: I0223 13:05:57.520024 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-utilities\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.520207 master-0 kubenswrapper[7784]: I0223 13:05:57.520138 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-catalog-content\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.520292 master-0 kubenswrapper[7784]: I0223 13:05:57.520217 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmcjv\" (UniqueName: \"kubernetes.io/projected/9e0e3072-a35c-4404-891c-f31fafd0b4b1-kube-api-access-rmcjv\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.529726 master-0 kubenswrapper[7784]: I0223 13:05:57.529678 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:57.537742 master-0 kubenswrapper[7784]: I0223 13:05:57.537699 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21541c8c-3d6b-4af1-a03a-d899cebb9c26" path="/var/lib/kubelet/pods/21541c8c-3d6b-4af1-a03a-d899cebb9c26/volumes" Feb 23 13:05:57.620726 master-0 kubenswrapper[7784]: I0223 13:05:57.620656 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bd537759-1528-465f-a3bb-e56fbf4cee74-host-etc-kube\") pod \"bd537759-1528-465f-a3bb-e56fbf4cee74\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " Feb 23 13:05:57.620950 master-0 kubenswrapper[7784]: I0223 13:05:57.620820 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd537759-1528-465f-a3bb-e56fbf4cee74-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "bd537759-1528-465f-a3bb-e56fbf4cee74" (UID: "bd537759-1528-465f-a3bb-e56fbf4cee74"). InnerVolumeSpecName "host-etc-kube". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:05:57.620950 master-0 kubenswrapper[7784]: I0223 13:05:57.620857 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd537759-1528-465f-a3bb-e56fbf4cee74-cloud-controller-manager-operator-tls\") pod \"bd537759-1528-465f-a3bb-e56fbf4cee74\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " Feb 23 13:05:57.621037 master-0 kubenswrapper[7784]: I0223 13:05:57.620996 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-images\") pod \"bd537759-1528-465f-a3bb-e56fbf4cee74\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " Feb 23 13:05:57.621136 master-0 kubenswrapper[7784]: I0223 13:05:57.621113 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvdj2\" (UniqueName: \"kubernetes.io/projected/bd537759-1528-465f-a3bb-e56fbf4cee74-kube-api-access-cvdj2\") pod \"bd537759-1528-465f-a3bb-e56fbf4cee74\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " Feb 23 13:05:57.621245 master-0 kubenswrapper[7784]: I0223 13:05:57.621224 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-auth-proxy-config\") pod \"bd537759-1528-465f-a3bb-e56fbf4cee74\" (UID: \"bd537759-1528-465f-a3bb-e56fbf4cee74\") " Feb 23 13:05:57.621501 master-0 kubenswrapper[7784]: I0223 13:05:57.621466 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-images" (OuterVolumeSpecName: "images") pod "bd537759-1528-465f-a3bb-e56fbf4cee74" (UID: "bd537759-1528-465f-a3bb-e56fbf4cee74"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:05:57.621581 master-0 kubenswrapper[7784]: I0223 13:05:57.621508 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-catalog-content\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.621964 master-0 kubenswrapper[7784]: I0223 13:05:57.621887 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "bd537759-1528-465f-a3bb-e56fbf4cee74" (UID: "bd537759-1528-465f-a3bb-e56fbf4cee74"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:05:57.622033 master-0 kubenswrapper[7784]: I0223 13:05:57.621927 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmcjv\" (UniqueName: \"kubernetes.io/projected/9e0e3072-a35c-4404-891c-f31fafd0b4b1-kube-api-access-rmcjv\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.622213 master-0 kubenswrapper[7784]: I0223 13:05:57.622175 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-catalog-content\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.622305 master-0 kubenswrapper[7784]: I0223 13:05:57.622269 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-utilities\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.622477 master-0 kubenswrapper[7784]: I0223 13:05:57.622449 7784 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:57.622477 master-0 kubenswrapper[7784]: I0223 13:05:57.622475 7784 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bd537759-1528-465f-a3bb-e56fbf4cee74-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:57.622596 master-0 kubenswrapper[7784]: I0223 13:05:57.622490 7784 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd537759-1528-465f-a3bb-e56fbf4cee74-images\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:57.622686 master-0 kubenswrapper[7784]: I0223 13:05:57.622661 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-utilities\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.624094 master-0 kubenswrapper[7784]: I0223 13:05:57.624064 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd537759-1528-465f-a3bb-e56fbf4cee74-kube-api-access-cvdj2" (OuterVolumeSpecName: "kube-api-access-cvdj2") pod "bd537759-1528-465f-a3bb-e56fbf4cee74" (UID: "bd537759-1528-465f-a3bb-e56fbf4cee74"). InnerVolumeSpecName "kube-api-access-cvdj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:05:57.625681 master-0 kubenswrapper[7784]: I0223 13:05:57.625627 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd537759-1528-465f-a3bb-e56fbf4cee74-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "bd537759-1528-465f-a3bb-e56fbf4cee74" (UID: "bd537759-1528-465f-a3bb-e56fbf4cee74"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:05:57.634544 master-0 kubenswrapper[7784]: I0223 13:05:57.634520 7784 scope.go:117] "RemoveContainer" containerID="1ac5db7a64f2f6a417b7aa444094f3b04d08a91a07d4cc6037194f4d5f089c43" Feb 23 13:05:57.639576 master-0 kubenswrapper[7784]: I0223 13:05:57.639530 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmcjv\" (UniqueName: \"kubernetes.io/projected/9e0e3072-a35c-4404-891c-f31fafd0b4b1-kube-api-access-rmcjv\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:57.725454 master-0 kubenswrapper[7784]: I0223 13:05:57.724567 7784 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd537759-1528-465f-a3bb-e56fbf4cee74-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:57.725454 master-0 kubenswrapper[7784]: I0223 13:05:57.724624 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvdj2\" (UniqueName: \"kubernetes.io/projected/bd537759-1528-465f-a3bb-e56fbf4cee74-kube-api-access-cvdj2\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:57.844088 master-0 kubenswrapper[7784]: I0223 13:05:57.844038 7784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-gm6kx" Feb 23 13:05:57.852810 master-0 kubenswrapper[7784]: I0223 13:05:57.852762 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:05:58.271745 master-0 kubenswrapper[7784]: I0223 13:05:58.271682 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwhpv"] Feb 23 13:05:58.277678 master-0 kubenswrapper[7784]: I0223 13:05:58.277635 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" event={"ID":"0d134032-1c35-4b69-9336-bcdc9c1cb87d","Type":"ContainerStarted","Data":"49844090cc1129b2d843c2317ee9aa9edebd16f2ac5c94c083315778ac1b8f03"} Feb 23 13:05:58.279583 master-0 kubenswrapper[7784]: W0223 13:05:58.278661 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e0e3072_a35c_4404_891c_f31fafd0b4b1.slice/crio-9864787648d4e15093640c185e770cdfc44c9c159ce5adfbe7392aee39b016ba WatchSource:0}: Error finding container 9864787648d4e15093640c185e770cdfc44c9c159ce5adfbe7392aee39b016ba: Status 404 returned error can't find the container with id 9864787648d4e15093640c185e770cdfc44c9c159ce5adfbe7392aee39b016ba Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284328 7784 generic.go:334] "Generic (PLEG): container finished" podID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerID="1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89" exitCode=0 Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284380 7784 generic.go:334] "Generic (PLEG): container finished" podID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerID="afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c" exitCode=0 Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284392 7784 generic.go:334] 
"Generic (PLEG): container finished" podID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerID="87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13" exitCode=0 Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284443 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" event={"ID":"bd537759-1528-465f-a3bb-e56fbf4cee74","Type":"ContainerDied","Data":"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89"} Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284478 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" event={"ID":"bd537759-1528-465f-a3bb-e56fbf4cee74","Type":"ContainerDied","Data":"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c"} Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284498 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" event={"ID":"bd537759-1528-465f-a3bb-e56fbf4cee74","Type":"ContainerDied","Data":"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13"} Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284516 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" event={"ID":"bd537759-1528-465f-a3bb-e56fbf4cee74","Type":"ContainerDied","Data":"477048cf7928248cd8970baea6983da04450948ff32b9ff1904c0c54deeb1651"} Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284539 7784 scope.go:117] "RemoveContainer" containerID="1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89" Feb 23 13:05:58.284863 master-0 kubenswrapper[7784]: I0223 13:05:58.284681 7784 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb" Feb 23 13:05:58.288102 master-0 kubenswrapper[7784]: I0223 13:05:58.288062 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerStarted","Data":"32088e4f48e7f6e833fcb88730321358fea1c298090821bdfe130f137e38f95c"} Feb 23 13:05:58.288621 master-0 kubenswrapper[7784]: I0223 13:05:58.288596 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:05:58.290626 master-0 kubenswrapper[7784]: I0223 13:05:58.290591 7784 generic.go:334] "Generic (PLEG): container finished" podID="3a5284f9-cbb7-400b-ab39-bfef60ec198b" containerID="d24c6d197de6f9706c62ad38004bd20b34f4c8cf1d966f2a08d91932b823f26a" exitCode=0 Feb 23 13:05:58.290626 master-0 kubenswrapper[7784]: I0223 13:05:58.290622 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7wq9" event={"ID":"3a5284f9-cbb7-400b-ab39-bfef60ec198b","Type":"ContainerDied","Data":"d24c6d197de6f9706c62ad38004bd20b34f4c8cf1d966f2a08d91932b823f26a"} Feb 23 13:05:58.298210 master-0 kubenswrapper[7784]: I0223 13:05:58.298094 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" podStartSLOduration=2.298065938 podStartE2EDuration="2.298065938s" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:58.294525251 +0000 UTC m=+301.029378894" watchObservedRunningTime="2026-02-23 13:05:58.298065938 +0000 UTC m=+301.032919591" Feb 23 13:05:58.316934 
master-0 kubenswrapper[7784]: I0223 13:05:58.316883 7784 scope.go:117] "RemoveContainer" containerID="afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c" Feb 23 13:05:58.348494 master-0 kubenswrapper[7784]: I0223 13:05:58.348443 7784 scope.go:117] "RemoveContainer" containerID="87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13" Feb 23 13:05:58.348787 master-0 kubenswrapper[7784]: I0223 13:05:58.348721 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podStartSLOduration=2.441003909 podStartE2EDuration="17.348705285s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:42.39184435 +0000 UTC m=+285.126697993" lastFinishedPulling="2026-02-23 13:05:57.299545726 +0000 UTC m=+300.034399369" observedRunningTime="2026-02-23 13:05:58.346097531 +0000 UTC m=+301.080951194" watchObservedRunningTime="2026-02-23 13:05:58.348705285 +0000 UTC m=+301.083558918" Feb 23 13:05:58.368383 master-0 kubenswrapper[7784]: I0223 13:05:58.368313 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb"] Feb 23 13:05:58.372588 master-0 kubenswrapper[7784]: I0223 13:05:58.372500 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gj9fb"] Feb 23 13:05:58.381528 master-0 kubenswrapper[7784]: I0223 13:05:58.381060 7784 scope.go:117] "RemoveContainer" containerID="1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89" Feb 23 13:05:58.383246 master-0 kubenswrapper[7784]: E0223 13:05:58.382693 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89\": container with ID starting with 
1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89 not found: ID does not exist" containerID="1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89" Feb 23 13:05:58.383246 master-0 kubenswrapper[7784]: I0223 13:05:58.382743 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89"} err="failed to get container status \"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89\": rpc error: code = NotFound desc = could not find container \"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89\": container with ID starting with 1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89 not found: ID does not exist" Feb 23 13:05:58.383246 master-0 kubenswrapper[7784]: I0223 13:05:58.382780 7784 scope.go:117] "RemoveContainer" containerID="afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c" Feb 23 13:05:58.386963 master-0 kubenswrapper[7784]: E0223 13:05:58.386836 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c\": container with ID starting with afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c not found: ID does not exist" containerID="afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c" Feb 23 13:05:58.387043 master-0 kubenswrapper[7784]: I0223 13:05:58.386937 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c"} err="failed to get container status \"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c\": rpc error: code = NotFound desc = could not find container \"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c\": container with ID starting with 
afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c not found: ID does not exist" Feb 23 13:05:58.387043 master-0 kubenswrapper[7784]: I0223 13:05:58.387004 7784 scope.go:117] "RemoveContainer" containerID="87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13" Feb 23 13:05:58.387772 master-0 kubenswrapper[7784]: E0223 13:05:58.387746 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13\": container with ID starting with 87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13 not found: ID does not exist" containerID="87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13" Feb 23 13:05:58.387836 master-0 kubenswrapper[7784]: I0223 13:05:58.387780 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13"} err="failed to get container status \"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13\": rpc error: code = NotFound desc = could not find container \"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13\": container with ID starting with 87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13 not found: ID does not exist" Feb 23 13:05:58.387836 master-0 kubenswrapper[7784]: I0223 13:05:58.387798 7784 scope.go:117] "RemoveContainer" containerID="1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89" Feb 23 13:05:58.388184 master-0 kubenswrapper[7784]: I0223 13:05:58.388161 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89"} err="failed to get container status \"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89\": rpc error: code = NotFound desc = could not find container 
\"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89\": container with ID starting with 1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89 not found: ID does not exist" Feb 23 13:05:58.388264 master-0 kubenswrapper[7784]: I0223 13:05:58.388183 7784 scope.go:117] "RemoveContainer" containerID="afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c" Feb 23 13:05:58.388628 master-0 kubenswrapper[7784]: I0223 13:05:58.388587 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c"} err="failed to get container status \"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c\": rpc error: code = NotFound desc = could not find container \"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c\": container with ID starting with afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c not found: ID does not exist" Feb 23 13:05:58.388628 master-0 kubenswrapper[7784]: I0223 13:05:58.388611 7784 scope.go:117] "RemoveContainer" containerID="87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13" Feb 23 13:05:58.389259 master-0 kubenswrapper[7784]: I0223 13:05:58.388939 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13"} err="failed to get container status \"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13\": rpc error: code = NotFound desc = could not find container \"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13\": container with ID starting with 87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13 not found: ID does not exist" Feb 23 13:05:58.389259 master-0 kubenswrapper[7784]: I0223 13:05:58.388992 7784 scope.go:117] "RemoveContainer" containerID="1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89" Feb 23 
13:05:58.389676 master-0 kubenswrapper[7784]: I0223 13:05:58.389598 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89"} err="failed to get container status \"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89\": rpc error: code = NotFound desc = could not find container \"1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89\": container with ID starting with 1d45a6bb33e45df1fb8ededba4e7863c460ab6fca5c29ee5ef1a9b915ab18c89 not found: ID does not exist" Feb 23 13:05:58.389676 master-0 kubenswrapper[7784]: I0223 13:05:58.389667 7784 scope.go:117] "RemoveContainer" containerID="afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c" Feb 23 13:05:58.390399 master-0 kubenswrapper[7784]: I0223 13:05:58.390327 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c"} err="failed to get container status \"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c\": rpc error: code = NotFound desc = could not find container \"afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c\": container with ID starting with afe735f662bdfe3240a5477d757f34222d7167521e38743789e3d1e5371cee8c not found: ID does not exist" Feb 23 13:05:58.390493 master-0 kubenswrapper[7784]: I0223 13:05:58.390394 7784 scope.go:117] "RemoveContainer" containerID="87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13" Feb 23 13:05:58.391094 master-0 kubenswrapper[7784]: I0223 13:05:58.391012 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13"} err="failed to get container status \"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13\": rpc error: code = NotFound desc = could not find container 
\"87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13\": container with ID starting with 87fe226a82128e5c95871b51f4b5f83760c02c823daab07f224c63e6c726ef13 not found: ID does not exist" Feb 23 13:05:58.398044 master-0 kubenswrapper[7784]: I0223 13:05:58.397173 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf"] Feb 23 13:05:58.398044 master-0 kubenswrapper[7784]: E0223 13:05:58.397585 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="cluster-cloud-controller-manager" Feb 23 13:05:58.398044 master-0 kubenswrapper[7784]: I0223 13:05:58.397604 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="cluster-cloud-controller-manager" Feb 23 13:05:58.398044 master-0 kubenswrapper[7784]: E0223 13:05:58.397621 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="kube-rbac-proxy" Feb 23 13:05:58.398044 master-0 kubenswrapper[7784]: I0223 13:05:58.397818 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="kube-rbac-proxy" Feb 23 13:05:58.398044 master-0 kubenswrapper[7784]: E0223 13:05:58.397871 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="config-sync-controllers" Feb 23 13:05:58.398044 master-0 kubenswrapper[7784]: I0223 13:05:58.397884 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="config-sync-controllers" Feb 23 13:05:58.398538 master-0 kubenswrapper[7784]: I0223 13:05:58.398396 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="cluster-cloud-controller-manager" Feb 23 
13:05:58.398538 master-0 kubenswrapper[7784]: I0223 13:05:58.398446 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="config-sync-controllers" Feb 23 13:05:58.398538 master-0 kubenswrapper[7784]: I0223 13:05:58.398462 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" containerName="kube-rbac-proxy" Feb 23 13:05:58.399697 master-0 kubenswrapper[7784]: I0223 13:05:58.399664 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.405690 master-0 kubenswrapper[7784]: I0223 13:05:58.403608 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:05:58.405690 master-0 kubenswrapper[7784]: I0223 13:05:58.403852 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 23 13:05:58.405690 master-0 kubenswrapper[7784]: I0223 13:05:58.404291 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 23 13:05:58.405690 master-0 kubenswrapper[7784]: I0223 13:05:58.404822 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:05:58.405690 master-0 kubenswrapper[7784]: I0223 13:05:58.405056 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 23 13:05:58.405690 master-0 kubenswrapper[7784]: I0223 13:05:58.405422 7784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4whws" Feb 23 13:05:58.553062 master-0 kubenswrapper[7784]: I0223 13:05:58.553006 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5b459832-b875-49a6-a7c3-253fa6c8e45a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.553221 master-0 kubenswrapper[7784]: I0223 13:05:58.553066 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg9l8\" (UniqueName: \"kubernetes.io/projected/5b459832-b875-49a6-a7c3-253fa6c8e45a-kube-api-access-wg9l8\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.553221 master-0 kubenswrapper[7784]: I0223 13:05:58.553112 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.553221 master-0 kubenswrapper[7784]: I0223 13:05:58.553135 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images\") pod 
\"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.553221 master-0 kubenswrapper[7784]: I0223 13:05:58.553182 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b459832-b875-49a6-a7c3-253fa6c8e45a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.615556 master-0 kubenswrapper[7784]: I0223 13:05:58.615506 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zrtmg"] Feb 23 13:05:58.616487 master-0 kubenswrapper[7784]: I0223 13:05:58.616467 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.620934 master-0 kubenswrapper[7784]: I0223 13:05:58.618866 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-nhgb8" Feb 23 13:05:58.637755 master-0 kubenswrapper[7784]: I0223 13:05:58.637706 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrtmg"] Feb 23 13:05:58.654788 master-0 kubenswrapper[7784]: I0223 13:05:58.654728 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.655066 master-0 kubenswrapper[7784]: I0223 13:05:58.655045 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.655205 master-0 kubenswrapper[7784]: I0223 13:05:58.655187 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b459832-b875-49a6-a7c3-253fa6c8e45a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.655299 
master-0 kubenswrapper[7784]: I0223 13:05:58.655287 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5b459832-b875-49a6-a7c3-253fa6c8e45a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.655437 master-0 kubenswrapper[7784]: I0223 13:05:58.655415 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg9l8\" (UniqueName: \"kubernetes.io/projected/5b459832-b875-49a6-a7c3-253fa6c8e45a-kube-api-access-wg9l8\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.655676 master-0 kubenswrapper[7784]: I0223 13:05:58.655596 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5b459832-b875-49a6-a7c3-253fa6c8e45a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.655813 master-0 kubenswrapper[7784]: I0223 13:05:58.655783 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.656248 master-0 kubenswrapper[7784]: I0223 
13:05:58.656184 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.658476 master-0 kubenswrapper[7784]: I0223 13:05:58.658412 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b459832-b875-49a6-a7c3-253fa6c8e45a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.672628 master-0 kubenswrapper[7784]: I0223 13:05:58.672589 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg9l8\" (UniqueName: \"kubernetes.io/projected/5b459832-b875-49a6-a7c3-253fa6c8e45a-kube-api-access-wg9l8\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.743525 master-0 kubenswrapper[7784]: I0223 13:05:58.743282 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:05:58.758538 master-0 kubenswrapper[7784]: I0223 13:05:58.757929 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xp47\" (UniqueName: \"kubernetes.io/projected/e96ce488-0099-43de-9933-425b7c981055-kube-api-access-7xp47\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.758538 master-0 kubenswrapper[7784]: I0223 13:05:58.758031 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-catalog-content\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.758538 master-0 kubenswrapper[7784]: I0223 13:05:58.758095 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-utilities\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.771410 master-0 kubenswrapper[7784]: I0223 13:05:58.771354 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr"] Feb 23 13:05:58.772418 master-0 kubenswrapper[7784]: I0223 13:05:58.772390 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.774720 master-0 kubenswrapper[7784]: I0223 13:05:58.774671 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 13:05:58.774819 master-0 kubenswrapper[7784]: I0223 13:05:58.774763 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-jh64m" Feb 23 13:05:58.775096 master-0 kubenswrapper[7784]: W0223 13:05:58.775057 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b459832_b875_49a6_a7c3_253fa6c8e45a.slice/crio-279e93b0fd46f71a0b7004cb4febe2cd24136f3bf75f93770b5add473b652180 WatchSource:0}: Error finding container 279e93b0fd46f71a0b7004cb4febe2cd24136f3bf75f93770b5add473b652180: Status 404 returned error can't find the container with id 279e93b0fd46f71a0b7004cb4febe2cd24136f3bf75f93770b5add473b652180 Feb 23 13:05:58.785721 master-0 kubenswrapper[7784]: I0223 13:05:58.782185 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr"] Feb 23 13:05:58.859670 master-0 kubenswrapper[7784]: I0223 13:05:58.859618 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xp47\" (UniqueName: \"kubernetes.io/projected/e96ce488-0099-43de-9933-425b7c981055-kube-api-access-7xp47\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.859670 master-0 kubenswrapper[7784]: I0223 13:05:58.859681 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-catalog-content\") pod 
\"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.859942 master-0 kubenswrapper[7784]: I0223 13:05:58.859892 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06ccd378-23ee-49b7-a435-4b01de772155-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.859982 master-0 kubenswrapper[7784]: I0223 13:05:58.859952 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cjfj\" (UniqueName: \"kubernetes.io/projected/06ccd378-23ee-49b7-a435-4b01de772155-kube-api-access-7cjfj\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.860170 master-0 kubenswrapper[7784]: I0223 13:05:58.860092 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06ccd378-23ee-49b7-a435-4b01de772155-proxy-tls\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.860170 master-0 kubenswrapper[7784]: I0223 13:05:58.860255 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-utilities\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.860170 master-0 
kubenswrapper[7784]: I0223 13:05:58.860093 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-catalog-content\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.860993 master-0 kubenswrapper[7784]: I0223 13:05:58.860952 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-utilities\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.876147 master-0 kubenswrapper[7784]: I0223 13:05:58.876118 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xp47\" (UniqueName: \"kubernetes.io/projected/e96ce488-0099-43de-9933-425b7c981055-kube-api-access-7xp47\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.943275 master-0 kubenswrapper[7784]: I0223 13:05:58.943205 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:05:58.961654 master-0 kubenswrapper[7784]: I0223 13:05:58.961616 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06ccd378-23ee-49b7-a435-4b01de772155-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.962760 master-0 kubenswrapper[7784]: I0223 13:05:58.962711 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06ccd378-23ee-49b7-a435-4b01de772155-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.962828 master-0 kubenswrapper[7784]: I0223 13:05:58.962788 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cjfj\" (UniqueName: \"kubernetes.io/projected/06ccd378-23ee-49b7-a435-4b01de772155-kube-api-access-7cjfj\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.962883 master-0 kubenswrapper[7784]: I0223 13:05:58.962860 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06ccd378-23ee-49b7-a435-4b01de772155-proxy-tls\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.967546 master-0 kubenswrapper[7784]: I0223 13:05:58.967507 7784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06ccd378-23ee-49b7-a435-4b01de772155-proxy-tls\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:58.982329 master-0 kubenswrapper[7784]: I0223 13:05:58.982129 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cjfj\" (UniqueName: \"kubernetes.io/projected/06ccd378-23ee-49b7-a435-4b01de772155-kube-api-access-7cjfj\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:59.131951 master-0 kubenswrapper[7784]: I0223 13:05:59.131784 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:05:59.301011 master-0 kubenswrapper[7784]: I0223 13:05:59.300938 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" event={"ID":"5b459832-b875-49a6-a7c3-253fa6c8e45a","Type":"ContainerStarted","Data":"073e3a82d992740a811524c4765957bec3fbd76083b11e3fe502671def51474a"} Feb 23 13:05:59.301011 master-0 kubenswrapper[7784]: I0223 13:05:59.300997 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" event={"ID":"5b459832-b875-49a6-a7c3-253fa6c8e45a","Type":"ContainerStarted","Data":"279e93b0fd46f71a0b7004cb4febe2cd24136f3bf75f93770b5add473b652180"} Feb 23 13:05:59.303832 master-0 kubenswrapper[7784]: I0223 13:05:59.303741 7784 generic.go:334] "Generic (PLEG): container finished" 
podID="9e0e3072-a35c-4404-891c-f31fafd0b4b1" containerID="d1c0bbd7755a5caeab64bb63934c87f9dbb896e38d0781069ba996be4781a8c9" exitCode=0 Feb 23 13:05:59.303832 master-0 kubenswrapper[7784]: I0223 13:05:59.303793 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwhpv" event={"ID":"9e0e3072-a35c-4404-891c-f31fafd0b4b1","Type":"ContainerDied","Data":"d1c0bbd7755a5caeab64bb63934c87f9dbb896e38d0781069ba996be4781a8c9"} Feb 23 13:05:59.303832 master-0 kubenswrapper[7784]: I0223 13:05:59.303822 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwhpv" event={"ID":"9e0e3072-a35c-4404-891c-f31fafd0b4b1","Type":"ContainerStarted","Data":"9864787648d4e15093640c185e770cdfc44c9c159ce5adfbe7392aee39b016ba"} Feb 23 13:05:59.306374 master-0 kubenswrapper[7784]: I0223 13:05:59.306324 7784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:05:59.394658 master-0 kubenswrapper[7784]: I0223 13:05:59.394581 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrtmg"] Feb 23 13:05:59.526541 master-0 kubenswrapper[7784]: I0223 13:05:59.526457 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd537759-1528-465f-a3bb-e56fbf4cee74" path="/var/lib/kubelet/pods/bd537759-1528-465f-a3bb-e56fbf4cee74/volumes" Feb 23 13:06:00.635527 master-0 kubenswrapper[7784]: I0223 13:06:00.635300 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-rlbcj_f348bffa-b2f6-4695-88a7-923625e7fb02/authentication-operator/0.log" Feb 23 13:06:00.832152 master-0 kubenswrapper[7784]: I0223 13:06:00.832068 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-rlbcj_f348bffa-b2f6-4695-88a7-923625e7fb02/authentication-operator/1.log" Feb 23 
13:06:01.228300 master-0 kubenswrapper[7784]: I0223 13:06:01.228241 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6c65bdd8f8-vblb2_77ea2b54-bcc2-4c4e-9415-03984721b5b1/fix-audit-permissions/0.log" Feb 23 13:06:01.602105 master-0 kubenswrapper[7784]: I0223 13:06:01.602029 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6c65bdd8f8-vblb2_77ea2b54-bcc2-4c4e-9415-03984721b5b1/oauth-apiserver/0.log" Feb 23 13:06:01.632731 master-0 kubenswrapper[7784]: I0223 13:06:01.632671 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-cj6hr_4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/kube-apiserver-operator/0.log" Feb 23 13:06:02.001883 master-0 kubenswrapper[7784]: I0223 13:06:02.001766 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-cj6hr_4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/kube-apiserver-operator/1.log" Feb 23 13:06:02.028167 master-0 kubenswrapper[7784]: I0223 13:06:02.028102 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/setup/0.log" Feb 23 13:06:02.235244 master-0 kubenswrapper[7784]: I0223 13:06:02.235181 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver/0.log" Feb 23 13:06:02.430151 master-0 kubenswrapper[7784]: I0223 13:06:02.430045 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver-insecure-readyz/0.log" Feb 23 13:06:02.633455 master-0 kubenswrapper[7784]: I0223 13:06:02.633362 7784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_54b76471-bb9d-45a1-b3be-53e4f013e604/installer/0.log" Feb 23 13:06:02.714394 master-0 kubenswrapper[7784]: W0223 13:06:02.714251 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode96ce488_0099_43de_9933_425b7c981055.slice/crio-860f93b7f1bef63565efa90fb0877c6e364d6648096b1b89b73c03207fe0536b WatchSource:0}: Error finding container 860f93b7f1bef63565efa90fb0877c6e364d6648096b1b89b73c03207fe0536b: Status 404 returned error can't find the container with id 860f93b7f1bef63565efa90fb0877c6e364d6648096b1b89b73c03207fe0536b Feb 23 13:06:02.830696 master-0 kubenswrapper[7784]: I0223 13:06:02.830649 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_4bad4fd9-074b-4a4e-8af9-50bdc4be09df/installer/0.log" Feb 23 13:06:02.912701 master-0 kubenswrapper[7784]: I0223 13:06:02.912635 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:06:03.009422 master-0 kubenswrapper[7784]: I0223 13:06:03.009311 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr"] Feb 23 13:06:03.032673 master-0 kubenswrapper[7784]: I0223 13:06:03.032594 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-hzbld_3a6b0d84-a344-43e4-b9c4-c8e0670528de/kube-controller-manager-operator/0.log" Feb 23 13:06:03.064913 master-0 kubenswrapper[7784]: W0223 13:06:03.064837 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06ccd378_23ee_49b7_a435_4b01de772155.slice/crio-f2a89a16b5cdf52d6375f76682e1cfee8f9385caf2527d209e27f16f7df56fbc 
WatchSource:0}: Error finding container f2a89a16b5cdf52d6375f76682e1cfee8f9385caf2527d209e27f16f7df56fbc: Status 404 returned error can't find the container with id f2a89a16b5cdf52d6375f76682e1cfee8f9385caf2527d209e27f16f7df56fbc Feb 23 13:06:03.230154 master-0 kubenswrapper[7784]: I0223 13:06:03.230042 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-hzbld_3a6b0d84-a344-43e4-b9c4-c8e0670528de/kube-controller-manager-operator/1.log" Feb 23 13:06:03.362920 master-0 kubenswrapper[7784]: I0223 13:06:03.358712 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7b65dc9fcb-kcfgf"] Feb 23 13:06:03.362920 master-0 kubenswrapper[7784]: I0223 13:06:03.359643 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.362920 master-0 kubenswrapper[7784]: I0223 13:06:03.362498 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 13:06:03.366742 master-0 kubenswrapper[7784]: I0223 13:06:03.364373 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8"] Feb 23 13:06:03.366742 master-0 kubenswrapper[7784]: I0223 13:06:03.364820 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 13:06:03.366742 master-0 kubenswrapper[7784]: I0223 13:06:03.365183 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 13:06:03.366742 master-0 kubenswrapper[7784]: I0223 13:06:03.365316 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 13:06:03.366742 master-0 kubenswrapper[7784]: I0223 13:06:03.365526 7784 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 13:06:03.366742 master-0 kubenswrapper[7784]: I0223 13:06:03.365707 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 13:06:03.374221 master-0 kubenswrapper[7784]: I0223 13:06:03.372569 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t"] Feb 23 13:06:03.374221 master-0 kubenswrapper[7784]: I0223 13:06:03.372935 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" Feb 23 13:06:03.374221 master-0 kubenswrapper[7784]: I0223 13:06:03.373360 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" Feb 23 13:06:03.375894 master-0 kubenswrapper[7784]: I0223 13:06:03.375847 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 23 13:06:03.376140 master-0 kubenswrapper[7784]: I0223 13:06:03.376113 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t"] Feb 23 13:06:03.382116 master-0 kubenswrapper[7784]: I0223 13:06:03.382072 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8"] Feb 23 13:06:03.437322 master-0 kubenswrapper[7784]: I0223 13:06:03.437241 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/3.log" Feb 23 13:06:03.488709 master-0 kubenswrapper[7784]: I0223 13:06:03.488463 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7l66s\" (UniqueName: \"kubernetes.io/projected/73ba4f16-0217-4bf1-8fc2-6b385eda0771-kube-api-access-7l66s\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.488709 master-0 kubenswrapper[7784]: I0223 13:06:03.488645 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.488709 master-0 kubenswrapper[7784]: I0223 13:06:03.488682 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.488709 master-0 kubenswrapper[7784]: I0223 13:06:03.488705 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.489171 master-0 kubenswrapper[7784]: I0223 13:06:03.488746 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-ld22t\" (UID: \"54001c8e-cb57-47dc-8594-9daed4190bda\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" Feb 23 13:06:03.489171 master-0 kubenswrapper[7784]: I0223 13:06:03.488780 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.489171 master-0 kubenswrapper[7784]: I0223 13:06:03.488826 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpnzd\" (UniqueName: \"kubernetes.io/projected/b12352eb-04d7-4419-b1bf-d08bca9da599-kube-api-access-cpnzd\") pod \"network-check-source-58fb6744f5-b7dr8\" (UID: \"b12352eb-04d7-4419-b1bf-d08bca9da599\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" Feb 23 13:06:03.590486 master-0 kubenswrapper[7784]: I0223 13:06:03.590397 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.591352 master-0 kubenswrapper[7784]: I0223 13:06:03.591247 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.591526 master-0 kubenswrapper[7784]: I0223 13:06:03.591499 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" 
(UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.591657 master-0 kubenswrapper[7784]: I0223 13:06:03.591607 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-ld22t\" (UID: \"54001c8e-cb57-47dc-8594-9daed4190bda\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" Feb 23 13:06:03.593597 master-0 kubenswrapper[7784]: I0223 13:06:03.593553 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.593834 master-0 kubenswrapper[7784]: I0223 13:06:03.593723 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.593886 master-0 kubenswrapper[7784]: I0223 13:06:03.593847 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpnzd\" (UniqueName: \"kubernetes.io/projected/b12352eb-04d7-4419-b1bf-d08bca9da599-kube-api-access-cpnzd\") pod \"network-check-source-58fb6744f5-b7dr8\" (UID: \"b12352eb-04d7-4419-b1bf-d08bca9da599\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" Feb 23 13:06:03.593961 
master-0 kubenswrapper[7784]: I0223 13:06:03.593942 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l66s\" (UniqueName: \"kubernetes.io/projected/73ba4f16-0217-4bf1-8fc2-6b385eda0771-kube-api-access-7l66s\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.595028 master-0 kubenswrapper[7784]: I0223 13:06:03.594840 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.595886 master-0 kubenswrapper[7784]: I0223 13:06:03.595840 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-ld22t\" (UID: \"54001c8e-cb57-47dc-8594-9daed4190bda\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" Feb 23 13:06:03.596900 master-0 kubenswrapper[7784]: I0223 13:06:03.596867 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.596984 master-0 kubenswrapper[7784]: I0223 13:06:03.596946 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: 
\"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.613857 master-0 kubenswrapper[7784]: I0223 13:06:03.613802 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l66s\" (UniqueName: \"kubernetes.io/projected/73ba4f16-0217-4bf1-8fc2-6b385eda0771-kube-api-access-7l66s\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.616305 master-0 kubenswrapper[7784]: I0223 13:06:03.616263 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpnzd\" (UniqueName: \"kubernetes.io/projected/b12352eb-04d7-4419-b1bf-d08bca9da599-kube-api-access-cpnzd\") pod \"network-check-source-58fb6744f5-b7dr8\" (UID: \"b12352eb-04d7-4419-b1bf-d08bca9da599\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" Feb 23 13:06:03.619721 master-0 kubenswrapper[7784]: I0223 13:06:03.619680 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" event={"ID":"5b459832-b875-49a6-a7c3-253fa6c8e45a","Type":"ContainerStarted","Data":"a9027a437a069d3e61211cec4da1b4062cd983b6c30386a19e7397862260fe63"} Feb 23 13:06:03.619795 master-0 kubenswrapper[7784]: I0223 13:06:03.619737 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" event={"ID":"5b459832-b875-49a6-a7c3-253fa6c8e45a","Type":"ContainerStarted","Data":"911881d5a2d0f3c43c47fd15a337bd82e109aecb0b9c3c50da074241070e8cf2"} Feb 23 13:06:03.622383 master-0 kubenswrapper[7784]: I0223 13:06:03.622088 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" 
event={"ID":"945907dd-f6b3-400f-b539-e1310eb11dd7","Type":"ContainerStarted","Data":"a83aac34f47ac3f97410a9bf5bdc4317a0c1f0c15f67f8e7e08d79f22fdcf756"} Feb 23 13:06:03.626530 master-0 kubenswrapper[7784]: I0223 13:06:03.626483 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" event={"ID":"06ccd378-23ee-49b7-a435-4b01de772155","Type":"ContainerStarted","Data":"0eae76131535712db144008e6f09a76f6820877f7474d8ce18e71486c8e31a63"} Feb 23 13:06:03.626530 master-0 kubenswrapper[7784]: I0223 13:06:03.626525 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" event={"ID":"06ccd378-23ee-49b7-a435-4b01de772155","Type":"ContainerStarted","Data":"bf735a33c90ddc7a029571644f3d6ca2aaf06bfcc626f0c073f79b0ea1dfbeda"} Feb 23 13:06:03.626667 master-0 kubenswrapper[7784]: I0223 13:06:03.626540 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" event={"ID":"06ccd378-23ee-49b7-a435-4b01de772155","Type":"ContainerStarted","Data":"f2a89a16b5cdf52d6375f76682e1cfee8f9385caf2527d209e27f16f7df56fbc"} Feb 23 13:06:03.630397 master-0 kubenswrapper[7784]: I0223 13:06:03.630322 7784 generic.go:334] "Generic (PLEG): container finished" podID="e96ce488-0099-43de-9933-425b7c981055" containerID="e4825f9df6fa16b0ad9dbf9273e7e948a88d9ef3bae67e20a5a9a1b6ebc14de3" exitCode=0 Feb 23 13:06:03.630520 master-0 kubenswrapper[7784]: I0223 13:06:03.630417 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrtmg" event={"ID":"e96ce488-0099-43de-9933-425b7c981055","Type":"ContainerDied","Data":"e4825f9df6fa16b0ad9dbf9273e7e948a88d9ef3bae67e20a5a9a1b6ebc14de3"} Feb 23 13:06:03.630520 master-0 kubenswrapper[7784]: I0223 13:06:03.630461 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zrtmg" event={"ID":"e96ce488-0099-43de-9933-425b7c981055","Type":"ContainerStarted","Data":"860f93b7f1bef63565efa90fb0877c6e364d6648096b1b89b73c03207fe0536b"} Feb 23 13:06:03.643408 master-0 kubenswrapper[7784]: I0223 13:06:03.643314 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" podStartSLOduration=5.643290875 podStartE2EDuration="5.643290875s" podCreationTimestamp="2026-02-23 13:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:06:03.643276344 +0000 UTC m=+306.378130007" watchObservedRunningTime="2026-02-23 13:06:03.643290875 +0000 UTC m=+306.378144518" Feb 23 13:06:03.696771 master-0 kubenswrapper[7784]: I0223 13:06:03.696677 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:06:03.697243 master-0 kubenswrapper[7784]: I0223 13:06:03.697184 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" podStartSLOduration=5.6971693519999995 podStartE2EDuration="5.697169352s" podCreationTimestamp="2026-02-23 13:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:06:03.695237405 +0000 UTC m=+306.430091048" watchObservedRunningTime="2026-02-23 13:06:03.697169352 +0000 UTC m=+306.432022995" Feb 23 13:06:03.721066 master-0 kubenswrapper[7784]: I0223 13:06:03.719587 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" Feb 23 13:06:03.723285 master-0 kubenswrapper[7784]: I0223 13:06:03.723210 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" podStartSLOduration=13.664695824 podStartE2EDuration="22.723166377s" podCreationTimestamp="2026-02-23 13:05:41 +0000 UTC" firstStartedPulling="2026-02-23 13:05:53.798021855 +0000 UTC m=+296.532875518" lastFinishedPulling="2026-02-23 13:06:02.856492428 +0000 UTC m=+305.591346071" observedRunningTime="2026-02-23 13:06:03.719148329 +0000 UTC m=+306.454001992" watchObservedRunningTime="2026-02-23 13:06:03.723166377 +0000 UTC m=+306.458020020" Feb 23 13:06:03.739294 master-0 kubenswrapper[7784]: I0223 13:06:03.739192 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" Feb 23 13:06:03.798519 master-0 kubenswrapper[7784]: I0223 13:06:03.798396 7784 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:06:03.836660 master-0 kubenswrapper[7784]: I0223 13:06:03.836174 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/4.log" Feb 23 13:06:04.034764 master-0 kubenswrapper[7784]: I0223 13:06:04.034671 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/0.log" Feb 23 13:06:04.227673 master-0 kubenswrapper[7784]: I0223 13:06:04.227590 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8"] Feb 23 13:06:04.235632 master-0 kubenswrapper[7784]: I0223 
13:06:04.235247 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/0.log" Feb 23 13:06:04.238915 master-0 kubenswrapper[7784]: W0223 13:06:04.238877 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb12352eb_04d7_4419_b1bf_d08bca9da599.slice/crio-d2460f89a8ee17465ca8b123cba158a911b19401cc35323955dae6be552d4e5d WatchSource:0}: Error finding container d2460f89a8ee17465ca8b123cba158a911b19401cc35323955dae6be552d4e5d: Status 404 returned error can't find the container with id d2460f89a8ee17465ca8b123cba158a911b19401cc35323955dae6be552d4e5d Feb 23 13:06:04.295718 master-0 kubenswrapper[7784]: I0223 13:06:04.295654 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t"] Feb 23 13:06:04.429915 master-0 kubenswrapper[7784]: I0223 13:06:04.429855 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/1.log" Feb 23 13:06:04.628952 master-0 kubenswrapper[7784]: I0223 13:06:04.628824 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_283fd2f4-771b-4592-a143-b7e3a5ed6765/installer/0.log" Feb 23 13:06:04.637775 master-0 kubenswrapper[7784]: I0223 13:06:04.637721 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" event={"ID":"b12352eb-04d7-4419-b1bf-d08bca9da599","Type":"ContainerStarted","Data":"27b4f088cbff9c458ae044cc4d16f64ae39945ece2c2c99950ff74ea08ad501a"} Feb 23 13:06:04.637939 master-0 kubenswrapper[7784]: I0223 13:06:04.637786 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" 
event={"ID":"b12352eb-04d7-4419-b1bf-d08bca9da599","Type":"ContainerStarted","Data":"d2460f89a8ee17465ca8b123cba158a911b19401cc35323955dae6be552d4e5d"} Feb 23 13:06:04.639284 master-0 kubenswrapper[7784]: I0223 13:06:04.639217 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" event={"ID":"54001c8e-cb57-47dc-8594-9daed4190bda","Type":"ContainerStarted","Data":"fce1914660e945c88d472e8a5d86bf17798d1db67260addab80c44f005293735"} Feb 23 13:06:04.640272 master-0 kubenswrapper[7784]: I0223 13:06:04.640229 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" event={"ID":"73ba4f16-0217-4bf1-8fc2-6b385eda0771","Type":"ContainerStarted","Data":"3d07b83dc456c1a725cd00216a0076881595c484156f383050f864fdf8f89296"} Feb 23 13:06:04.659151 master-0 kubenswrapper[7784]: I0223 13:06:04.659027 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" podStartSLOduration=348.659005777 podStartE2EDuration="5m48.659005777s" podCreationTimestamp="2026-02-23 13:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:06:04.652267592 +0000 UTC m=+307.387121235" watchObservedRunningTime="2026-02-23 13:06:04.659005777 +0000 UTC m=+307.393859420" Feb 23 13:06:05.076520 master-0 kubenswrapper[7784]: I0223 13:06:05.076455 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-cz4nt_3daf0176-92e7-4642-8643-4afbefb77235/kube-scheduler-operator-container/0.log" Feb 23 13:06:05.086600 master-0 kubenswrapper[7784]: I0223 13:06:05.086552 7784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-cz4nt_3daf0176-92e7-4642-8643-4afbefb77235/kube-scheduler-operator-container/1.log" Feb 23 13:06:05.243789 master-0 kubenswrapper[7784]: I0223 13:06:05.243716 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-zh69g_7d0a976c-1492-4989-a5ff-e386564dd6ba/openshift-apiserver-operator/0.log" Feb 23 13:06:05.428612 master-0 kubenswrapper[7784]: I0223 13:06:05.428384 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-zh69g_7d0a976c-1492-4989-a5ff-e386564dd6ba/openshift-apiserver-operator/1.log" Feb 23 13:06:05.627875 master-0 kubenswrapper[7784]: I0223 13:06:05.627826 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-9f44475c9-drjp5_922e0be5-23c2-481e-89be-e918dc4ce90c/fix-audit-permissions/0.log" Feb 23 13:06:05.830653 master-0 kubenswrapper[7784]: I0223 13:06:05.830564 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-9f44475c9-drjp5_922e0be5-23c2-481e-89be-e918dc4ce90c/openshift-apiserver/0.log" Feb 23 13:06:06.028923 master-0 kubenswrapper[7784]: I0223 13:06:06.028872 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-9f44475c9-drjp5_922e0be5-23c2-481e-89be-e918dc4ce90c/openshift-apiserver-check-endpoints/0.log" Feb 23 13:06:06.273497 master-0 kubenswrapper[7784]: I0223 13:06:06.243798 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-dk5t4_d9b02d3c-f671-4850-8c6e-315044a1376c/etcd-operator/0.log" Feb 23 13:06:06.273497 master-0 kubenswrapper[7784]: I0223 13:06:06.247279 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-97rhg"] Feb 23 13:06:06.273497 
master-0 kubenswrapper[7784]: I0223 13:06:06.248775 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.273497 master-0 kubenswrapper[7784]: I0223 13:06:06.251407 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 13:06:06.273497 master-0 kubenswrapper[7784]: I0223 13:06:06.251692 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 13:06:06.273497 master-0 kubenswrapper[7784]: I0223 13:06:06.251993 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-f8dtp" Feb 23 13:06:06.273497 master-0 kubenswrapper[7784]: I0223 13:06:06.268668 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgnr\" (UniqueName: \"kubernetes.io/projected/bdad149d-da6f-49ac-85e5-deb01f161166-kube-api-access-llgnr\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.273497 master-0 kubenswrapper[7784]: I0223 13:06:06.268721 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.273497 master-0 kubenswrapper[7784]: I0223 13:06:06.268753 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token\") 
pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.370014 master-0 kubenswrapper[7784]: I0223 13:06:06.369949 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llgnr\" (UniqueName: \"kubernetes.io/projected/bdad149d-da6f-49ac-85e5-deb01f161166-kube-api-access-llgnr\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.370014 master-0 kubenswrapper[7784]: I0223 13:06:06.370010 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.370281 master-0 kubenswrapper[7784]: I0223 13:06:06.370034 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.373284 master-0 kubenswrapper[7784]: I0223 13:06:06.373250 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.373832 master-0 kubenswrapper[7784]: I0223 13:06:06.373744 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.438365 master-0 kubenswrapper[7784]: I0223 13:06:06.435419 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgnr\" (UniqueName: \"kubernetes.io/projected/bdad149d-da6f-49ac-85e5-deb01f161166-kube-api-access-llgnr\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:06.450720 master-0 kubenswrapper[7784]: I0223 13:06:06.450672 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-dk5t4_d9b02d3c-f671-4850-8c6e-315044a1376c/etcd-operator/1.log" Feb 23 13:06:06.693229 master-0 kubenswrapper[7784]: I0223 13:06:06.677429 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:06:12.608023 master-0 kubenswrapper[7784]: I0223 13:06:12.607936 7784 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:06:12.608593 master-0 kubenswrapper[7784]: I0223 13:06:12.608364 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" containerID="cri-o://732718685ab155d2debc878135f1ee8c34bcbeb0432ce1b0e4f48e042448bb6f" gracePeriod=30 Feb 23 13:06:12.608593 master-0 kubenswrapper[7784]: I0223 13:06:12.608505 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" containerID="cri-o://5b35e479450d4a6b40bd607a639c1e13f90b0357d49d26800e6c4e4d871bdc8e" gracePeriod=30 Feb 23 13:06:12.608593 master-0 kubenswrapper[7784]: I0223 13:06:12.608545 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" containerID="cri-o://871cf7c1949a225e4e891402ce66b79bc70495b9c671e838bb0d1b8bd80d9387" gracePeriod=30 Feb 23 13:06:12.608593 master-0 kubenswrapper[7784]: I0223 13:06:12.608581 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" containerID="cri-o://b7d0d3e2816f38acbe22b92b7481a331092bbc9bb66b8ca7d3c6ed43c771956e" gracePeriod=30 Feb 23 13:06:12.608714 master-0 kubenswrapper[7784]: I0223 13:06:12.608609 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" 
containerID="cri-o://4589ad8293a096ef0dc1448f5164370fffb1afeaee6d4e435bb6f7962c78df3c" gracePeriod=30 Feb 23 13:06:12.609997 master-0 kubenswrapper[7784]: I0223 13:06:12.609917 7784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:06:12.610144 master-0 kubenswrapper[7784]: E0223 13:06:12.610117 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 23 13:06:12.610144 master-0 kubenswrapper[7784]: I0223 13:06:12.610135 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: E0223 13:06:12.610147 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: I0223 13:06:12.610155 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: E0223 13:06:12.610164 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: I0223 13:06:12.610169 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: E0223 13:06:12.610179 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: I0223 13:06:12.610185 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: E0223 13:06:12.610198 7784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: I0223 13:06:12.610204 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: E0223 13:06:12.610213 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: I0223 13:06:12.610219 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: E0223 13:06:12.610227 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 23 13:06:12.610240 master-0 kubenswrapper[7784]: I0223 13:06:12.610232 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 23 13:06:12.610713 master-0 kubenswrapper[7784]: E0223 13:06:12.610276 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 23 13:06:12.610713 master-0 kubenswrapper[7784]: I0223 13:06:12.610284 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 23 13:06:12.612226 master-0 kubenswrapper[7784]: I0223 13:06:12.612182 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 23 13:06:12.612302 master-0 kubenswrapper[7784]: I0223 13:06:12.612224 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 23 13:06:12.612302 master-0 kubenswrapper[7784]: I0223 13:06:12.612258 
7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 23 13:06:12.612302 master-0 kubenswrapper[7784]: I0223 13:06:12.612267 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 23 13:06:12.612302 master-0 kubenswrapper[7784]: I0223 13:06:12.612282 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 23 13:06:12.720482 master-0 kubenswrapper[7784]: I0223 13:06:12.720425 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.720482 master-0 kubenswrapper[7784]: I0223 13:06:12.720489 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.720751 master-0 kubenswrapper[7784]: I0223 13:06:12.720521 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.720751 master-0 kubenswrapper[7784]: I0223 13:06:12.720561 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.720751 master-0 kubenswrapper[7784]: I0223 13:06:12.720579 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.720751 master-0 kubenswrapper[7784]: I0223 13:06:12.720617 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.822475 master-0 kubenswrapper[7784]: I0223 13:06:12.822420 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.822475 master-0 kubenswrapper[7784]: I0223 13:06:12.822480 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.822684 master-0 kubenswrapper[7784]: I0223 13:06:12.822525 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.822684 master-0 kubenswrapper[7784]: I0223 13:06:12.822637 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.822806 master-0 kubenswrapper[7784]: I0223 13:06:12.822720 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.822806 master-0 kubenswrapper[7784]: I0223 13:06:12.822765 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.822806 master-0 kubenswrapper[7784]: I0223 13:06:12.822793 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.822806 master-0 kubenswrapper[7784]: I0223 13:06:12.822791 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.823042 master-0 kubenswrapper[7784]: I0223 13:06:12.822849 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.823042 master-0 kubenswrapper[7784]: I0223 13:06:12.822864 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.823042 master-0 kubenswrapper[7784]: I0223 13:06:12.822897 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:12.823042 master-0 kubenswrapper[7784]: I0223 13:06:12.822921 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:13.732676 master-0 kubenswrapper[7784]: I0223 13:06:13.732575 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 23 13:06:13.734993 master-0 kubenswrapper[7784]: I0223 13:06:13.734965 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 23 13:06:13.737045 master-0 kubenswrapper[7784]: I0223 13:06:13.736972 7784 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="5b35e479450d4a6b40bd607a639c1e13f90b0357d49d26800e6c4e4d871bdc8e" exitCode=2
Feb 23 13:06:13.737120 master-0 kubenswrapper[7784]: I0223 13:06:13.737047 7784 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="871cf7c1949a225e4e891402ce66b79bc70495b9c671e838bb0d1b8bd80d9387" exitCode=0
Feb 23 13:06:13.737120 master-0 kubenswrapper[7784]: I0223 13:06:13.737061 7784 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="b7d0d3e2816f38acbe22b92b7481a331092bbc9bb66b8ca7d3c6ed43c771956e" exitCode=2
Feb 23 13:06:18.252453 master-0 kubenswrapper[7784]: W0223 13:06:18.252406 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdad149d_da6f_49ac_85e5_deb01f161166.slice/crio-d87fae590598a273f246a10c0bceaf42bb07fba93878914b2833795c3815488b WatchSource:0}: Error finding container d87fae590598a273f246a10c0bceaf42bb07fba93878914b2833795c3815488b: Status 404 returned error can't find the container with id d87fae590598a273f246a10c0bceaf42bb07fba93878914b2833795c3815488b
Feb 23 13:06:18.770139 master-0 kubenswrapper[7784]: I0223 13:06:18.770063 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-97rhg" event={"ID":"bdad149d-da6f-49ac-85e5-deb01f161166","Type":"ContainerStarted","Data":"d87fae590598a273f246a10c0bceaf42bb07fba93878914b2833795c3815488b"}
Feb 23 13:06:19.780872 master-0 kubenswrapper[7784]: I0223 13:06:19.780782 7784 generic.go:334] "Generic (PLEG): container finished" podID="9e0e3072-a35c-4404-891c-f31fafd0b4b1" containerID="3cd52788b3301033e468b721bd7961d3399c0e73da8a5d018cca17858544dc9b" exitCode=0
Feb 23 13:06:19.781819 master-0 kubenswrapper[7784]: I0223 13:06:19.780906 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwhpv" event={"ID":"9e0e3072-a35c-4404-891c-f31fafd0b4b1","Type":"ContainerDied","Data":"3cd52788b3301033e468b721bd7961d3399c0e73da8a5d018cca17858544dc9b"}
Feb 23 13:06:19.783469 master-0 kubenswrapper[7784]: I0223 13:06:19.783332 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" event={"ID":"73ba4f16-0217-4bf1-8fc2-6b385eda0771","Type":"ContainerStarted","Data":"269b02e4bdd6edd8e8fdf7d10edb62714b47f3af26d18d46c35faad3badc04c5"}
Feb 23 13:06:19.785232 master-0 kubenswrapper[7784]: I0223 13:06:19.785170 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-97rhg" event={"ID":"bdad149d-da6f-49ac-85e5-deb01f161166","Type":"ContainerStarted","Data":"da1d54bde2e08c943ea7beea6ad2bb2433558eb2dae213c847d0088f7ae16b82"}
Feb 23 13:06:19.788058 master-0 kubenswrapper[7784]: I0223 13:06:19.787980 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" event={"ID":"54001c8e-cb57-47dc-8594-9daed4190bda","Type":"ContainerStarted","Data":"deb50f17bfb3c50282acec6f08a12de3ea13afb081d6af65a74255576cc5d478"}
Feb 23 13:06:19.788411 master-0 kubenswrapper[7784]: I0223 13:06:19.788323 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t"
Feb 23 13:06:19.792163 master-0 kubenswrapper[7784]: I0223 13:06:19.792124 7784 generic.go:334] "Generic (PLEG): container finished" podID="8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3" containerID="11f905712f868b255556777ca3c0a3d839f42a18d6d30988e8ef92608383064b" exitCode=0
Feb 23 13:06:19.792163 master-0 kubenswrapper[7784]: I0223 13:06:19.792158 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnmk2" event={"ID":"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3","Type":"ContainerDied","Data":"11f905712f868b255556777ca3c0a3d839f42a18d6d30988e8ef92608383064b"}
Feb 23 13:06:19.797319 master-0 kubenswrapper[7784]: I0223 13:06:19.797219 7784 generic.go:334] "Generic (PLEG): container finished" podID="3a5284f9-cbb7-400b-ab39-bfef60ec198b" containerID="f09a751c8840d92654361a2adf9d69809898e2c9d64d0549e317c7e2743ed948" exitCode=0
Feb 23 13:06:19.797319 master-0 kubenswrapper[7784]: I0223 13:06:19.797300 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7wq9" event={"ID":"3a5284f9-cbb7-400b-ab39-bfef60ec198b","Type":"ContainerDied","Data":"f09a751c8840d92654361a2adf9d69809898e2c9d64d0549e317c7e2743ed948"}
Feb 23 13:06:19.802166 master-0 kubenswrapper[7784]: I0223 13:06:19.802138 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t"
Feb 23 13:06:20.698401 master-0 kubenswrapper[7784]: I0223 13:06:20.698317 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:06:20.701270 master-0 kubenswrapper[7784]: I0223 13:06:20.701237 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:20.701270 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:20.701270 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:20.701270 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:20.701516 master-0 kubenswrapper[7784]: I0223 13:06:20.701490 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:21.700837 master-0 kubenswrapper[7784]: I0223 13:06:21.700750 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:21.700837 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:21.700837 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:21.700837 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:21.701524 master-0 kubenswrapper[7784]: I0223 13:06:21.700870 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:22.702297 master-0 kubenswrapper[7784]: I0223 13:06:22.701995 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:22.702297 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:22.702297 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:22.702297 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:22.702297 master-0 kubenswrapper[7784]: I0223 13:06:22.702086 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:23.698476 master-0 kubenswrapper[7784]: I0223 13:06:23.698382 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:06:23.701540 master-0 kubenswrapper[7784]: I0223 13:06:23.701489 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:23.701540 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:23.701540 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:23.701540 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:23.701843 master-0 kubenswrapper[7784]: I0223 13:06:23.701566 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:23.825837 master-0 kubenswrapper[7784]: I0223 13:06:23.825779 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnmk2" event={"ID":"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3","Type":"ContainerStarted","Data":"0ecca144d92149d528c3523758b35707a3e8f716ff3f1ac2bace574dc3968147"}
Feb 23 13:06:23.830608 master-0 kubenswrapper[7784]: I0223 13:06:23.830540 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7wq9" event={"ID":"3a5284f9-cbb7-400b-ab39-bfef60ec198b","Type":"ContainerStarted","Data":"9a727786364f38ae1a5e806c01947fa533e948c946a7116bad894c7cefd6fb5a"}
Feb 23 13:06:23.837903 master-0 kubenswrapper[7784]: I0223 13:06:23.834986 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrtmg" event={"ID":"e96ce488-0099-43de-9933-425b7c981055","Type":"ContainerStarted","Data":"375da7a63fe7d62490c8f3a3e0196f9371b3c8f858218ccbaf2b076eee1f97b8"}
Feb 23 13:06:23.841605 master-0 kubenswrapper[7784]: I0223 13:06:23.841562 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwhpv" event={"ID":"9e0e3072-a35c-4404-891c-f31fafd0b4b1","Type":"ContainerStarted","Data":"b58617e1d421758e55da43b3bcc7a6507994da4bd9a2cb0a76e846a7fb2491c3"}
Feb 23 13:06:24.701841 master-0 kubenswrapper[7784]: I0223 13:06:24.701700 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:24.701841 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:24.701841 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:24.701841 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:24.702471 master-0 kubenswrapper[7784]: I0223 13:06:24.701851 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:24.852633 master-0 kubenswrapper[7784]: I0223 13:06:24.852545 7784 generic.go:334] "Generic (PLEG): container finished" podID="e96ce488-0099-43de-9933-425b7c981055" containerID="375da7a63fe7d62490c8f3a3e0196f9371b3c8f858218ccbaf2b076eee1f97b8" exitCode=0
Feb 23 13:06:24.853434 master-0 kubenswrapper[7784]: I0223 13:06:24.852705 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrtmg" event={"ID":"e96ce488-0099-43de-9933-425b7c981055","Type":"ContainerDied","Data":"375da7a63fe7d62490c8f3a3e0196f9371b3c8f858218ccbaf2b076eee1f97b8"}
Feb 23 13:06:25.702213 master-0 kubenswrapper[7784]: I0223 13:06:25.701967 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:25.702213 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:25.702213 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:25.702213 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:25.702213 master-0 kubenswrapper[7784]: I0223 13:06:25.702103 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:25.869995 master-0 kubenswrapper[7784]: I0223 13:06:25.869858 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrtmg" event={"ID":"e96ce488-0099-43de-9933-425b7c981055","Type":"ContainerStarted","Data":"a01bdf6fe6f5e8ebab00fabe7fea49d28ff3af77c8ce066b212166c20bd97424"}
Feb 23 13:06:26.157724 master-0 kubenswrapper[7784]: I0223 13:06:26.157581 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:06:26.157724 master-0 kubenswrapper[7784]: I0223 13:06:26.157739 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:06:26.233797 master-0 kubenswrapper[7784]: I0223 13:06:26.233701 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:06:26.369429 master-0 kubenswrapper[7784]: I0223 13:06:26.369362 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:06:26.369783 master-0 kubenswrapper[7784]: I0223 13:06:26.369760 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:06:26.437005 master-0 kubenswrapper[7784]: I0223 13:06:26.436947 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:06:26.701980 master-0 kubenswrapper[7784]: I0223 13:06:26.701775 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:26.701980 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:26.701980 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:26.701980 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:26.702404 master-0 kubenswrapper[7784]: I0223 13:06:26.702014 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:26.885013 master-0 kubenswrapper[7784]: I0223 13:06:26.884900 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79" exitCode=1
Feb 23 13:06:26.885784 master-0 kubenswrapper[7784]: I0223 13:06:26.885037 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79"}
Feb 23 13:06:26.885784 master-0 kubenswrapper[7784]: I0223 13:06:26.885166 7784 scope.go:117] "RemoveContainer" containerID="708c16b81b0264d53e4f4fa259e09481e563a2bc5a1dbd63e658d7489a2758a3"
Feb 23 13:06:26.889317 master-0 kubenswrapper[7784]: I0223 13:06:26.889244 7784 scope.go:117] "RemoveContainer" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79"
Feb 23 13:06:26.889870 master-0 kubenswrapper[7784]: E0223 13:06:26.889800 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:06:27.701300 master-0 kubenswrapper[7784]: I0223 13:06:27.701186 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:27.701300 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:27.701300 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:27.701300 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:27.701656 master-0 kubenswrapper[7784]: I0223 13:06:27.701368 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:27.854278 master-0 kubenswrapper[7784]: I0223 13:06:27.854171 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:06:27.854856 master-0 kubenswrapper[7784]: I0223 13:06:27.854752 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:06:27.898814 master-0 kubenswrapper[7784]: I0223 13:06:27.898723 7784 generic.go:334] "Generic (PLEG): container finished" podID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" containerID="6cd0275f1f7db307e43c09e5b7b938a05a638192648b348b83255e2e4d8e9eb8" exitCode=0
Feb 23 13:06:27.900652 master-0 kubenswrapper[7784]: I0223 13:06:27.898878 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449","Type":"ContainerDied","Data":"6cd0275f1f7db307e43c09e5b7b938a05a638192648b348b83255e2e4d8e9eb8"}
Feb 23 13:06:27.931668 master-0 kubenswrapper[7784]: I0223 13:06:27.931576 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:06:28.701627 master-0 kubenswrapper[7784]: I0223 13:06:28.701471 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:28.701627 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:28.701627 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:28.701627 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:28.702529 master-0 kubenswrapper[7784]: I0223 13:06:28.701654 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:28.914535 master-0 kubenswrapper[7784]: I0223 13:06:28.914473 7784 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357" exitCode=1
Feb 23 13:06:28.915232 master-0 kubenswrapper[7784]: I0223 13:06:28.914547 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357"}
Feb 23 13:06:28.915232 master-0 kubenswrapper[7784]: I0223 13:06:28.914677 7784 scope.go:117] "RemoveContainer" containerID="f49a7c31e3a171926240734ad805919af2d46930792b7ef061d645ad8ae0dac5"
Feb 23 13:06:28.915811 master-0 kubenswrapper[7784]: I0223 13:06:28.915743 7784 scope.go:117] "RemoveContainer" containerID="1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357"
Feb 23 13:06:28.916114 master-0 kubenswrapper[7784]: E0223 13:06:28.916067 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(56c3cb71c9851003c8de7e7c5db4b87e)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e"
Feb 23 13:06:28.944507 master-0 kubenswrapper[7784]: I0223 13:06:28.944443 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:06:28.944507 master-0 kubenswrapper[7784]: I0223 13:06:28.944501 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:06:28.981307 master-0 kubenswrapper[7784]: I0223 13:06:28.981213 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:06:29.269927 master-0 kubenswrapper[7784]: I0223 13:06:29.269875 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 23 13:06:29.398663 master-0 kubenswrapper[7784]: I0223 13:06:29.398583 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kubelet-dir\") pod \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") "
Feb 23 13:06:29.398881 master-0 kubenswrapper[7784]: I0223 13:06:29.398691 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kube-api-access\") pod \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") "
Feb 23 13:06:29.398881 master-0 kubenswrapper[7784]: I0223 13:06:29.398770 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-var-lock\") pod \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\" (UID: \"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449\") "
Feb 23 13:06:29.398881 master-0 kubenswrapper[7784]: I0223 13:06:29.398767 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" (UID: "c0dfc05d-bd62-4c0c-aae4-5d1f44de9449"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:29.399125 master-0 kubenswrapper[7784]: I0223 13:06:29.399028 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-var-lock" (OuterVolumeSpecName: "var-lock") pod "c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" (UID: "c0dfc05d-bd62-4c0c-aae4-5d1f44de9449"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:29.399667 master-0 kubenswrapper[7784]: I0223 13:06:29.399627 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:29.399667 master-0 kubenswrapper[7784]: I0223 13:06:29.399660 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:29.403886 master-0 kubenswrapper[7784]: I0223 13:06:29.403828 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" (UID: "c0dfc05d-bd62-4c0c-aae4-5d1f44de9449"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:06:29.501438 master-0 kubenswrapper[7784]: I0223 13:06:29.501252 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0dfc05d-bd62-4c0c-aae4-5d1f44de9449-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:29.701468 master-0 kubenswrapper[7784]: I0223 13:06:29.701328 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:29.701468 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:29.701468 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:29.701468 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:29.701974 master-0 kubenswrapper[7784]: I0223 13:06:29.701480 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:29.791466 master-0 kubenswrapper[7784]: I0223 13:06:29.791201 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:06:29.792584 master-0 kubenswrapper[7784]: I0223 13:06:29.792115 7784 scope.go:117] "RemoveContainer" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79"
Feb 23 13:06:29.792584 master-0 kubenswrapper[7784]: E0223 13:06:29.792401 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:06:29.926889 master-0 kubenswrapper[7784]: I0223 13:06:29.926787 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 23 13:06:29.927811 master-0 kubenswrapper[7784]: I0223 13:06:29.927360 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"c0dfc05d-bd62-4c0c-aae4-5d1f44de9449","Type":"ContainerDied","Data":"109b623b5a1ea0fcc0a5a5fd7d747c9ee8a3d9d901c40db77e82589e69041e94"}
Feb 23 13:06:29.927811 master-0 kubenswrapper[7784]: I0223 13:06:29.927393 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="109b623b5a1ea0fcc0a5a5fd7d747c9ee8a3d9d901c40db77e82589e69041e94"
Feb 23 13:06:29.994116 master-0 kubenswrapper[7784]: I0223 13:06:29.994040 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zrtmg" podUID="e96ce488-0099-43de-9933-425b7c981055" containerName="registry-server" probeResult="failure" output=<
Feb 23 13:06:29.994116 master-0 kubenswrapper[7784]: timeout: failed to connect service ":50051" within 1s
Feb 23 13:06:29.994116 master-0 kubenswrapper[7784]: >
Feb 23 13:06:30.477891 master-0 kubenswrapper[7784]: I0223 13:06:30.477806 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:06:30.478476 master-0 kubenswrapper[7784]: I0223 13:06:30.478438 7784 scope.go:117] "RemoveContainer" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79"
Feb 23 13:06:30.478742 master-0 kubenswrapper[7784]: E0223 13:06:30.478705 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:06:30.701215 master-0 kubenswrapper[7784]: I0223 13:06:30.701139 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:30.701215 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:30.701215 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:30.701215 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:30.701215 master-0 kubenswrapper[7784]: I0223 13:06:30.701228 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:31.468780 master-0 kubenswrapper[7784]: E0223 13:06:31.468652 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:06:31.702013 master-0 kubenswrapper[7784]: I0223 13:06:31.701914 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:31.702013 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:31.702013 master-0 kubenswrapper[7784]: [+]process-running
ok Feb 23 13:06:31.702013 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:31.702518 master-0 kubenswrapper[7784]: I0223 13:06:31.702040 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:31.727460 master-0 kubenswrapper[7784]: E0223 13:06:31.727084 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:06:21Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:06:21Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:06:21Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:06:21Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaee
d\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":5
11059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8
159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)"
Feb 23 13:06:32.701420 master-0 kubenswrapper[7784]: I0223 13:06:32.701249 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:32.701420 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:32.701420 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:32.701420 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:32.702019 master-0 kubenswrapper[7784]: I0223 13:06:32.701475 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:33.700623 master-0 kubenswrapper[7784]: I0223 13:06:33.700550 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:33.700623 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:33.700623 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:33.700623 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:33.700983 master-0 kubenswrapper[7784]: I0223 13:06:33.700630 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:34.700415 master-0 kubenswrapper[7784]: I0223 13:06:34.700249 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:34.700415 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:34.700415 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:34.700415 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:34.700415 master-0 kubenswrapper[7784]: I0223 13:06:34.700321 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:35.701495 master-0 kubenswrapper[7784]: I0223 13:06:35.701380 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:35.701495 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:35.701495 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:35.701495 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:35.702745 master-0 kubenswrapper[7784]: I0223 13:06:35.701504 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:36.218601 master-0 kubenswrapper[7784]: I0223 13:06:36.218510 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:06:36.290111 master-0 kubenswrapper[7784]: I0223 13:06:36.290019 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:06:36.291182 master-0 kubenswrapper[7784]: I0223 13:06:36.291137 7784 scope.go:117] "RemoveContainer" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79"
Feb 23 13:06:36.291664 master-0 kubenswrapper[7784]: E0223 13:06:36.291568 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:06:36.416200 master-0 kubenswrapper[7784]: I0223 13:06:36.416134 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:06:36.701199 master-0 kubenswrapper[7784]: I0223 13:06:36.701097 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:36.701199 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:36.701199 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:36.701199 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:36.701902 master-0 kubenswrapper[7784]: I0223 13:06:36.701253 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:37.701021 master-0 kubenswrapper[7784]: I0223 13:06:37.700948 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:37.701021 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:37.701021 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:37.701021 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:37.701488 master-0 kubenswrapper[7784]: I0223 13:06:37.701028 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:38.700970 master-0 kubenswrapper[7784]: I0223 13:06:38.700861 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:38.700970 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:38.700970 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:38.700970 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:38.701930 master-0 kubenswrapper[7784]: I0223 13:06:38.700990 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:39.020993 master-0 kubenswrapper[7784]: I0223 13:06:39.020906 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:06:39.080404 master-0 kubenswrapper[7784]: I0223 13:06:39.080326 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:06:39.701756 master-0 kubenswrapper[7784]: I0223 13:06:39.701663 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:39.701756 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:39.701756 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:39.701756 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:39.702278 master-0 kubenswrapper[7784]: I0223 13:06:39.701769 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:40.700518 master-0 kubenswrapper[7784]: I0223 13:06:40.700404 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:40.700518 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:40.700518 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:40.700518 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:40.701124 master-0 kubenswrapper[7784]: I0223 13:06:40.700522 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:41.469858 master-0 kubenswrapper[7784]: E0223 13:06:41.469740 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:06:41.515262 master-0 kubenswrapper[7784]: I0223 13:06:41.515195 7784 scope.go:117] "RemoveContainer" containerID="1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357"
Feb 23 13:06:41.700593 master-0 kubenswrapper[7784]: I0223 13:06:41.700524 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:41.700593 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:41.700593 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:41.700593 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:41.700900 master-0 kubenswrapper[7784]: I0223 13:06:41.700602 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:41.727775 master-0 kubenswrapper[7784]: E0223 13:06:41.727672 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:06:42.006065 master-0 kubenswrapper[7784]: I0223 13:06:42.005864 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb"}
Feb 23 13:06:42.700911 master-0 kubenswrapper[7784]: I0223 13:06:42.700825 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:42.700911 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:42.700911 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:42.700911 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:42.700911 master-0 kubenswrapper[7784]: I0223 13:06:42.700906 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:43.015613 master-0 kubenswrapper[7784]: I0223 13:06:43.015529 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 23 13:06:43.017040 master-0 kubenswrapper[7784]: I0223 13:06:43.016989 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 23 13:06:43.017835 master-0 kubenswrapper[7784]: I0223 13:06:43.017795 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log"
Feb 23 13:06:43.018420 master-0 kubenswrapper[7784]: I0223 13:06:43.018367 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log"
Feb 23 13:06:43.020317 master-0 kubenswrapper[7784]: I0223 13:06:43.020235 7784 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="4589ad8293a096ef0dc1448f5164370fffb1afeaee6d4e435bb6f7962c78df3c" exitCode=137
Feb 23 13:06:43.020317 master-0 kubenswrapper[7784]: I0223 13:06:43.020293 7784 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="732718685ab155d2debc878135f1ee8c34bcbeb0432ce1b0e4f48e042448bb6f" exitCode=137
Feb 23 13:06:43.169510 master-0 kubenswrapper[7784]: I0223 13:06:43.169446 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 23 13:06:43.170651 master-0 kubenswrapper[7784]: I0223 13:06:43.170611 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 23 13:06:43.171938 master-0 kubenswrapper[7784]: I0223 13:06:43.171845 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log"
Feb 23 13:06:43.173045 master-0 kubenswrapper[7784]: I0223 13:06:43.172993 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log"
Feb 23 13:06:43.174796 master-0 kubenswrapper[7784]: I0223 13:06:43.174737 7784 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:43.313854 master-0 kubenswrapper[7784]: I0223 13:06:43.313669 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 23 13:06:43.313854 master-0 kubenswrapper[7784]: I0223 13:06:43.313836 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 23 13:06:43.314175 master-0 kubenswrapper[7784]: I0223 13:06:43.313831 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:43.314175 master-0 kubenswrapper[7784]: I0223 13:06:43.313880 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 23 13:06:43.314175 master-0 kubenswrapper[7784]: I0223 13:06:43.313927 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 23 13:06:43.314175 master-0 kubenswrapper[7784]: I0223 13:06:43.313936 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:43.314175 master-0 kubenswrapper[7784]: I0223 13:06:43.313978 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir" (OuterVolumeSpecName: "data-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:43.314175 master-0 kubenswrapper[7784]: I0223 13:06:43.313979 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 23 13:06:43.314175 master-0 kubenswrapper[7784]: I0223 13:06:43.314035 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir" (OuterVolumeSpecName: "log-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:43.314175 master-0 kubenswrapper[7784]: I0223 13:06:43.314120 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:43.314773 master-0 kubenswrapper[7784]: I0223 13:06:43.314205 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 23 13:06:43.314773 master-0 kubenswrapper[7784]: I0223 13:06:43.314290 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:43.314773 master-0 kubenswrapper[7784]: I0223 13:06:43.314686 7784 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:43.314773 master-0 kubenswrapper[7784]: I0223 13:06:43.314711 7784 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:43.314773 master-0 kubenswrapper[7784]: I0223 13:06:43.314730 7784 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:43.314773 master-0 kubenswrapper[7784]: I0223 13:06:43.314749 7784 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:43.314773 master-0 kubenswrapper[7784]: I0223 13:06:43.314765 7784 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:43.315219 master-0 kubenswrapper[7784]: I0223 13:06:43.314782 7784 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:43.527165 master-0 kubenswrapper[7784]: I0223 13:06:43.527086 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a83278819db2092fa26d8274eb3f00" path="/var/lib/kubelet/pods/18a83278819db2092fa26d8274eb3f00/volumes"
Feb 23 13:06:43.700709 master-0 kubenswrapper[7784]: I0223 13:06:43.700505 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:06:43.700709 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:06:43.700709 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:06:43.700709 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:06:43.700709 master-0 kubenswrapper[7784]: I0223 13:06:43.700587 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:06:44.027460 master-0 kubenswrapper[7784]: I0223 13:06:44.027388 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 23 13:06:44.028554 master-0 kubenswrapper[7784]: I0223 13:06:44.028498 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 23 13:06:44.029445 master-0 kubenswrapper[7784]: I0223 13:06:44.029398 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log"
Feb 23 13:06:44.029907 master-0 kubenswrapper[7784]: I0223 13:06:44.029860 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log"
Feb 23 13:06:44.031997 master-0 kubenswrapper[7784]: I0223 13:06:44.031957 7784 scope.go:117] "RemoveContainer" containerID="5b35e479450d4a6b40bd607a639c1e13f90b0357d49d26800e6c4e4d871bdc8e"
Feb 23 13:06:44.032105 master-0 kubenswrapper[7784]: I0223
13:06:44.032063 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:06:44.050683 master-0 kubenswrapper[7784]: I0223 13:06:44.050602 7784 scope.go:117] "RemoveContainer" containerID="871cf7c1949a225e4e891402ce66b79bc70495b9c671e838bb0d1b8bd80d9387" Feb 23 13:06:44.071329 master-0 kubenswrapper[7784]: I0223 13:06:44.071265 7784 scope.go:117] "RemoveContainer" containerID="b7d0d3e2816f38acbe22b92b7481a331092bbc9bb66b8ca7d3c6ed43c771956e" Feb 23 13:06:44.092452 master-0 kubenswrapper[7784]: I0223 13:06:44.092398 7784 scope.go:117] "RemoveContainer" containerID="4589ad8293a096ef0dc1448f5164370fffb1afeaee6d4e435bb6f7962c78df3c" Feb 23 13:06:44.112208 master-0 kubenswrapper[7784]: I0223 13:06:44.112159 7784 scope.go:117] "RemoveContainer" containerID="732718685ab155d2debc878135f1ee8c34bcbeb0432ce1b0e4f48e042448bb6f" Feb 23 13:06:44.131817 master-0 kubenswrapper[7784]: I0223 13:06:44.131764 7784 scope.go:117] "RemoveContainer" containerID="54831b6236c4e39cb7ee1d061f9dcd71b81fd654a26ceb27bb0db7808c016243" Feb 23 13:06:44.151118 master-0 kubenswrapper[7784]: I0223 13:06:44.151020 7784 scope.go:117] "RemoveContainer" containerID="5f2912a7aba95d3d6fab8df1a73bdd941d5cec4d910c0279136faf5966960607" Feb 23 13:06:44.180360 master-0 kubenswrapper[7784]: I0223 13:06:44.180307 7784 scope.go:117] "RemoveContainer" containerID="7ccd6c0f7b7169060efbd69b89d31fe78ead24ef11ec518fb0a078ce9f74b4ec" Feb 23 13:06:44.701516 master-0 kubenswrapper[7784]: I0223 13:06:44.701396 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:44.701516 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:44.701516 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:44.701516 master-0 
kubenswrapper[7784]: healthz check failed Feb 23 13:06:44.701516 master-0 kubenswrapper[7784]: I0223 13:06:44.701510 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:45.700858 master-0 kubenswrapper[7784]: I0223 13:06:45.700777 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:45.700858 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:45.700858 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:45.700858 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:45.701156 master-0 kubenswrapper[7784]: I0223 13:06:45.700872 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:46.699848 master-0 kubenswrapper[7784]: I0223 13:06:46.699785 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:46.699848 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:46.699848 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:46.699848 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:46.700474 master-0 kubenswrapper[7784]: I0223 13:06:46.699862 7784 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:47.701319 master-0 kubenswrapper[7784]: I0223 13:06:47.701241 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:47.701319 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:47.701319 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:47.701319 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:47.702262 master-0 kubenswrapper[7784]: I0223 13:06:47.701331 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:48.700514 master-0 kubenswrapper[7784]: I0223 13:06:48.700331 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:48.700514 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:48.700514 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:48.700514 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:48.700514 master-0 kubenswrapper[7784]: I0223 13:06:48.700478 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:49.515967 
master-0 kubenswrapper[7784]: I0223 13:06:49.515887 7784 scope.go:117] "RemoveContainer" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79" Feb 23 13:06:49.516582 master-0 kubenswrapper[7784]: E0223 13:06:49.516420 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:06:49.701615 master-0 kubenswrapper[7784]: I0223 13:06:49.701510 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:49.701615 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:49.701615 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:49.701615 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:49.701615 master-0 kubenswrapper[7784]: I0223 13:06:49.701608 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:50.700711 master-0 kubenswrapper[7784]: I0223 13:06:50.700621 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:50.700711 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:50.700711 master-0 
kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:50.700711 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:50.701247 master-0 kubenswrapper[7784]: I0223 13:06:50.700720 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:51.470898 master-0 kubenswrapper[7784]: E0223 13:06:51.470769 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:06:51.701718 master-0 kubenswrapper[7784]: I0223 13:06:51.701632 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:51.701718 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:51.701718 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:51.701718 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:51.702651 master-0 kubenswrapper[7784]: I0223 13:06:51.701745 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:51.728600 master-0 kubenswrapper[7784]: E0223 13:06:51.728456 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 23 13:06:51.901892 master-0 kubenswrapper[7784]: E0223 13:06:51.901704 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-vnmk2.1896e1fc792be630 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-vnmk2,UID:8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3,APIVersion:v1,ResourceVersion:9868,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 20.625s (20.626s including waiting). Image size: 1237042376 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:06:17.897797168 +0000 UTC m=+320.632650831,LastTimestamp:2026-02-23 13:06:17.897797168 +0000 UTC m=+320.632650831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:06:52.701984 master-0 kubenswrapper[7784]: I0223 13:06:52.701863 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:52.701984 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:52.701984 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:52.701984 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:52.703255 master-0 kubenswrapper[7784]: I0223 13:06:52.701985 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 23 13:06:53.702137 master-0 kubenswrapper[7784]: I0223 13:06:53.702067 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:53.702137 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:53.702137 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:53.702137 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:53.702851 master-0 kubenswrapper[7784]: I0223 13:06:53.702145 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:54.514329 master-0 kubenswrapper[7784]: I0223 13:06:54.514222 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:06:54.554932 master-0 kubenswrapper[7784]: I0223 13:06:54.554861 7784 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:06:54.554932 master-0 kubenswrapper[7784]: I0223 13:06:54.554909 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:06:54.702665 master-0 kubenswrapper[7784]: I0223 13:06:54.702589 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:54.702665 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:54.702665 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:54.702665 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:54.703797 master-0 kubenswrapper[7784]: I0223 13:06:54.703747 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:55.702001 master-0 kubenswrapper[7784]: I0223 13:06:55.701886 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:55.702001 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:55.702001 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:55.702001 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:55.702613 master-0 kubenswrapper[7784]: I0223 13:06:55.702022 
7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:56.701117 master-0 kubenswrapper[7784]: I0223 13:06:56.701035 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:56.701117 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:56.701117 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:56.701117 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:56.701939 master-0 kubenswrapper[7784]: I0223 13:06:56.701145 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:57.701064 master-0 kubenswrapper[7784]: I0223 13:06:57.700908 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:57.701064 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:57.701064 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:57.701064 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:57.702280 master-0 kubenswrapper[7784]: I0223 13:06:57.701093 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 23 13:06:58.701320 master-0 kubenswrapper[7784]: I0223 13:06:58.701200 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:58.701320 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:58.701320 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:58.701320 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:58.702363 master-0 kubenswrapper[7784]: I0223 13:06:58.701333 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:06:59.702419 master-0 kubenswrapper[7784]: I0223 13:06:59.702314 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:06:59.702419 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:06:59.702419 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:06:59.702419 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:06:59.703294 master-0 kubenswrapper[7784]: I0223 13:06:59.702440 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:00.701581 master-0 kubenswrapper[7784]: I0223 13:07:00.701491 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:00.701581 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:00.701581 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:00.701581 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:00.702260 master-0 kubenswrapper[7784]: I0223 13:07:00.701610 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:01.472130 master-0 kubenswrapper[7784]: E0223 13:07:01.472028 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:01.701579 master-0 kubenswrapper[7784]: I0223 13:07:01.701486 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:01.701579 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:01.701579 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:01.701579 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:01.702191 master-0 kubenswrapper[7784]: I0223 13:07:01.701599 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:01.729896 master-0 
kubenswrapper[7784]: E0223 13:07:01.729671 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:02.597685 master-0 kubenswrapper[7784]: E0223 13:07:02.597553 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[package-server-manager-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" podUID="e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Feb 23 13:07:02.701951 master-0 kubenswrapper[7784]: I0223 13:07:02.701851 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:02.701951 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:02.701951 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:02.701951 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:02.701951 master-0 kubenswrapper[7784]: I0223 13:07:02.701943 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:03.515592 master-0 kubenswrapper[7784]: I0223 13:07:03.515516 7784 scope.go:117] "RemoveContainer" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79" Feb 23 13:07:03.516115 master-0 kubenswrapper[7784]: E0223 13:07:03.516059 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with 
CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:07:03.701631 master-0 kubenswrapper[7784]: I0223 13:07:03.701531 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:03.701631 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:03.701631 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:03.701631 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:03.702689 master-0 kubenswrapper[7784]: I0223 13:07:03.701628 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:04.701124 master-0 kubenswrapper[7784]: I0223 13:07:04.701031 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:04.701124 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:04.701124 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:04.701124 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:04.701579 master-0 kubenswrapper[7784]: I0223 13:07:04.701137 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:05.701967 master-0 kubenswrapper[7784]: I0223 13:07:05.701895 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:05.701967 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:05.701967 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:05.701967 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:05.702822 master-0 kubenswrapper[7784]: I0223 13:07:05.702785 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:06.701157 master-0 kubenswrapper[7784]: I0223 13:07:06.701080 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:06.701157 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:06.701157 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:06.701157 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:06.701549 master-0 kubenswrapper[7784]: I0223 13:07:06.701522 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:07.701466 master-0 kubenswrapper[7784]: I0223 13:07:07.701245 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:07.701466 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:07.701466 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:07.701466 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:07.702448 master-0 kubenswrapper[7784]: I0223 13:07:07.701448 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:08.701751 master-0 kubenswrapper[7784]: I0223 13:07:08.701630 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:08.701751 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:08.701751 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:08.701751 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:08.702865 master-0 kubenswrapper[7784]: I0223 13:07:08.701791 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:09.702466 master-0 kubenswrapper[7784]: I0223 13:07:09.702242 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:09.702466 master-0 
kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:09.702466 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:09.702466 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:09.703784 master-0 kubenswrapper[7784]: I0223 13:07:09.702512 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:10.480072 master-0 kubenswrapper[7784]: I0223 13:07:10.479970 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:07:10.486752 master-0 kubenswrapper[7784]: I0223 13:07:10.486650 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:07:10.701427 master-0 kubenswrapper[7784]: I0223 13:07:10.701292 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:10.701427 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:10.701427 master-0 kubenswrapper[7784]: [+]process-running ok Feb 
23 13:07:10.701427 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:10.701983 master-0 kubenswrapper[7784]: I0223 13:07:10.701479 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:11.473115 master-0 kubenswrapper[7784]: E0223 13:07:11.472985 7784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:07:11.473115 master-0 kubenswrapper[7784]: I0223 13:07:11.473104 7784 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 23 13:07:11.700370 master-0 kubenswrapper[7784]: I0223 13:07:11.700273 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:11.700370 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:11.700370 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:11.700370 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:11.700653 master-0 kubenswrapper[7784]: I0223 13:07:11.700387 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:11.730050 master-0 kubenswrapper[7784]: E0223 13:07:11.729947 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting
node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Feb 23 13:07:11.730050 master-0 kubenswrapper[7784]: E0223 13:07:11.730000 7784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 23 13:07:12.701934 master-0 kubenswrapper[7784]: I0223 13:07:12.701392 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:12.701934 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:12.701934 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:12.701934 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:12.701934 master-0 kubenswrapper[7784]: I0223 13:07:12.701705 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:13.701513 master-0 kubenswrapper[7784]: I0223 13:07:13.701368 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:13.701513 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:13.701513 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:13.701513 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:13.702870 master-0 kubenswrapper[7784]: I0223 13:07:13.701518 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771"
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:14.702051 master-0 kubenswrapper[7784]: I0223 13:07:14.701923 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:14.702051 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:14.702051 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:14.702051 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:14.703238 master-0 kubenswrapper[7784]: I0223 13:07:14.702067 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:15.701687 master-0 kubenswrapper[7784]: I0223 13:07:15.701538 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:15.701687 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:15.701687 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:15.701687 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:15.702575 master-0 kubenswrapper[7784]: I0223 13:07:15.701691 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:16.702282 master-0 kubenswrapper[7784]: I0223 13:07:16.702144 7784 patch_prober.go:28] interesting
pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:16.702282 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:16.702282 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:16.702282 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:16.703308 master-0 kubenswrapper[7784]: I0223 13:07:16.702308 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:17.515005 master-0 kubenswrapper[7784]: I0223 13:07:17.514893 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:07:17.515460 master-0 kubenswrapper[7784]: I0223 13:07:17.515310 7784 scope.go:117] "RemoveContainer" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79"
Feb 23 13:07:17.516093 master-0 kubenswrapper[7784]: I0223 13:07:17.515968 7784 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:07:17.701441 master-0 kubenswrapper[7784]: I0223 13:07:17.701307 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:17.701441 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:17.701441 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:17.701441 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:17.701909 master-0 kubenswrapper[7784]: I0223 13:07:17.701467 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:18.344722 master-0 kubenswrapper[7784]: I0223 13:07:18.344607 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"46d6784fe671f2c68a10b213a05d1645131c37e8e0e3bbefbef25989eca152bd"}
Feb 23 13:07:18.701538 master-0 kubenswrapper[7784]: I0223 13:07:18.701312 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:18.701538 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:18.701538 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:18.701538 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:18.701538 master-0 kubenswrapper[7784]: I0223 13:07:18.701456 7784 prober.go:107]
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:19.701851 master-0 kubenswrapper[7784]: I0223 13:07:19.701734 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:19.701851 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:19.701851 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:19.701851 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:19.702968 master-0 kubenswrapper[7784]: I0223 13:07:19.701868 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:19.783171 master-0 kubenswrapper[7784]: I0223 13:07:19.782996 7784 status_manager.go:851] "Failed to get status for pod" podUID="9e0e3072-a35c-4404-891c-f31fafd0b4b1" pod="openshift-marketplace/redhat-marketplace-vwhpv" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-vwhpv)"
Feb 23 13:07:20.477678 master-0 kubenswrapper[7784]: I0223 13:07:20.477554 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:07:20.702721 master-0 kubenswrapper[7784]: I0223 13:07:20.702618 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500"
start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:20.702721 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:20.702721 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:20.702721 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:20.703936 master-0 kubenswrapper[7784]: I0223 13:07:20.702757 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:21.473849 master-0 kubenswrapper[7784]: E0223 13:07:21.473701 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Feb 23 13:07:21.701658 master-0 kubenswrapper[7784]: I0223 13:07:21.701576 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:21.701658 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:21.701658 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:21.701658 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:21.702316 master-0 kubenswrapper[7784]: I0223 13:07:21.702259 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:22.702260 master-0 kubenswrapper[7784]: I0223 13:07:22.702150 7784 patch_prober.go:28] interesting
pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:22.702260 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:22.702260 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:22.702260 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:22.703117 master-0 kubenswrapper[7784]: I0223 13:07:22.702303 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:23.701714 master-0 kubenswrapper[7784]: I0223 13:07:23.701619 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:23.701714 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:23.701714 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:23.701714 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:23.702618 master-0 kubenswrapper[7784]: I0223 13:07:23.702517 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:24.701666 master-0 kubenswrapper[7784]: I0223 13:07:24.701557 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23
13:07:24.701666 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:24.701666 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:24.701666 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:24.702259 master-0 kubenswrapper[7784]: I0223 13:07:24.701693 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:25.702296 master-0 kubenswrapper[7784]: I0223 13:07:25.701741 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:25.702296 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:25.702296 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:25.702296 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:25.702296 master-0 kubenswrapper[7784]: I0223 13:07:25.701917 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:25.906297 master-0 kubenswrapper[7784]: E0223 13:07:25.906017 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{prometheus-operator-admission-webhook-75d56db95f-ld22t.1896e1fc8dd82680 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:prometheus-operator-admission-webhook-75d56db95f-ld22t,UID:54001c8e-cb57-47dc-8594-9daed4190bda,APIVersion:v1,ResourceVersion:10146,FieldPath:spec.containers{prometheus-operator-admission-webhook},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350\" in 13.934s (13.934s including waiting). Image size: 444471741 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:06:18.244630144 +0000 UTC m=+320.979483817,LastTimestamp:2026-02-23 13:06:18.244630144 +0000 UTC m=+320.979483817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:07:26.289861 master-0 kubenswrapper[7784]: I0223 13:07:26.289752 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:07:26.701653 master-0 kubenswrapper[7784]: I0223 13:07:26.701423 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:26.701653 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:26.701653 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:26.701653 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:26.701653 master-0 kubenswrapper[7784]: I0223 13:07:26.701552 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:27.702121 master-0
kubenswrapper[7784]: I0223 13:07:27.701934 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:27.702121 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:27.702121 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:27.702121 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:27.702121 master-0 kubenswrapper[7784]: I0223 13:07:27.702086 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:28.558497 master-0 kubenswrapper[7784]: E0223 13:07:28.558425 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 23 13:07:28.559695 master-0 kubenswrapper[7784]: I0223 13:07:28.559663 7784 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 23 13:07:28.599669 master-0 kubenswrapper[7784]: W0223 13:07:28.599557 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb419b8533666d3ae7054c771ce97a95f.slice/crio-047da142cf199754ddf37417b491aa94e635780094c0890acb8879faf9433391 WatchSource:0}: Error finding container 047da142cf199754ddf37417b491aa94e635780094c0890acb8879faf9433391: Status 404 returned error can't find the container with id 047da142cf199754ddf37417b491aa94e635780094c0890acb8879faf9433391
Feb 23 13:07:28.700756 master-0 kubenswrapper[7784]: I0223 13:07:28.700563 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:28.700756 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:28.700756 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:28.700756 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:28.700756 master-0 kubenswrapper[7784]: I0223 13:07:28.700700 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:29.290986 master-0 kubenswrapper[7784]: I0223 13:07:29.290833 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:07:29.451970 master-0
kubenswrapper[7784]: I0223 13:07:29.451847 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/1.log"
Feb 23 13:07:29.454215 master-0 kubenswrapper[7784]: I0223 13:07:29.454131 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/0.log"
Feb 23 13:07:29.454426 master-0 kubenswrapper[7784]: I0223 13:07:29.454236 7784 generic.go:334] "Generic (PLEG): container finished" podID="878aa813-a8b9-4a6f-8086-778df276d0d7" containerID="2e5a5c45572547d68765aa2317c14d26774c109bceb25a699d848d50d57f589e" exitCode=1
Feb 23 13:07:29.454426 master-0 kubenswrapper[7784]: I0223 13:07:29.454319 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerDied","Data":"2e5a5c45572547d68765aa2317c14d26774c109bceb25a699d848d50d57f589e"}
Feb 23 13:07:29.454587 master-0 kubenswrapper[7784]: I0223 13:07:29.454495 7784 scope.go:117] "RemoveContainer" containerID="c8289f028a5b9b2ff9bd84ee035e05cf3ab1f61b8019dd41bc447fe370637ef6"
Feb 23 13:07:29.455612 master-0 kubenswrapper[7784]: I0223 13:07:29.455545 7784 scope.go:117] "RemoveContainer" containerID="2e5a5c45572547d68765aa2317c14d26774c109bceb25a699d848d50d57f589e"
Feb 23 13:07:29.456099 master-0 kubenswrapper[7784]: E0223 13:07:29.456032 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-k9h69_openshift-ingress-operator(878aa813-a8b9-4a6f-8086-778df276d0d7)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" podUID="878aa813-a8b9-4a6f-8086-778df276d0d7"
Feb 23
13:07:29.459098 master-0 kubenswrapper[7784]: I0223 13:07:29.458954 7784 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="82ee2b0499ab490936bc4f01ea0b261f0a05bd8f2beb37ede0c37988900d3cbd" exitCode=0
Feb 23 13:07:29.459098 master-0 kubenswrapper[7784]: I0223 13:07:29.459037 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"82ee2b0499ab490936bc4f01ea0b261f0a05bd8f2beb37ede0c37988900d3cbd"}
Feb 23 13:07:29.459540 master-0 kubenswrapper[7784]: I0223 13:07:29.459124 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"047da142cf199754ddf37417b491aa94e635780094c0890acb8879faf9433391"}
Feb 23 13:07:29.459635 master-0 kubenswrapper[7784]: I0223 13:07:29.459594 7784 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5"
Feb 23 13:07:29.459635 master-0 kubenswrapper[7784]: I0223 13:07:29.459628 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5"
Feb 23 13:07:29.702026 master-0 kubenswrapper[7784]: I0223 13:07:29.701848 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:29.702026 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:29.702026 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:29.702026 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:29.702331 master-0 kubenswrapper[7784]: I0223 13:07:29.702008 7784 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:30.471173 master-0 kubenswrapper[7784]: I0223 13:07:30.470890 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/1.log"
Feb 23 13:07:30.703081 master-0 kubenswrapper[7784]: I0223 13:07:30.702979 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:30.703081 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:30.703081 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:30.703081 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:30.703626 master-0 kubenswrapper[7784]: I0223 13:07:30.703123 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:31.675064 master-0 kubenswrapper[7784]: E0223 13:07:31.674899 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Feb 23 13:07:31.701787 master-0 kubenswrapper[7784]: I0223 13:07:31.701696 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500"
start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:07:31.701787 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:07:31.701787 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:07:31.701787 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:07:31.702253 master-0 kubenswrapper[7784]: I0223 13:07:31.701825 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:07:32.040281 master-0 kubenswrapper[7784]: E0223 13:07:32.039989 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:07:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:07:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:07:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:07:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d
7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c2af15d278f72034eecf3db74223b7e61f3d07c1a5c7ba760e7586915ff1b17e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e166d1252d7455d8bd62e43f2967e738ee9bdd6a09b7771a4187d82477ae7535\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1237042376},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5385b46d054c9ee73478bf23e07056d0b9f81d34619d0949927d8d9e791fcb5\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ebdc10b149ba97b999770285d06149ef92c780205d916c3cab994098e20be0ba\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210455233},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab73
9919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249b
c68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:32.701941 master-0 kubenswrapper[7784]: I0223 13:07:32.701825 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:32.701941 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:32.701941 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:32.701941 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:32.701941 master-0 kubenswrapper[7784]: 
I0223 13:07:32.701918 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:33.701927 master-0 kubenswrapper[7784]: I0223 13:07:33.701794 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:33.701927 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:33.701927 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:33.701927 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:33.701927 master-0 kubenswrapper[7784]: I0223 13:07:33.701924 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:34.702583 master-0 kubenswrapper[7784]: I0223 13:07:34.702479 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:34.702583 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:34.702583 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:34.702583 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:34.703691 master-0 kubenswrapper[7784]: I0223 13:07:34.702610 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:35.700458 master-0 kubenswrapper[7784]: I0223 13:07:35.700385 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:35.700458 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:35.700458 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:35.700458 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:35.700954 master-0 kubenswrapper[7784]: I0223 13:07:35.700495 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:36.700413 master-0 kubenswrapper[7784]: I0223 13:07:36.700308 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:36.700413 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:36.700413 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:36.700413 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:36.701073 master-0 kubenswrapper[7784]: I0223 13:07:36.700421 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:37.702285 master-0 kubenswrapper[7784]: I0223 13:07:37.702170 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:37.702285 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:37.702285 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:37.702285 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:37.703324 master-0 kubenswrapper[7784]: I0223 13:07:37.702284 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:38.702570 master-0 kubenswrapper[7784]: I0223 13:07:38.702441 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:38.702570 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:38.702570 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:38.702570 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:38.703793 master-0 kubenswrapper[7784]: I0223 13:07:38.702602 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:39.291137 master-0 kubenswrapper[7784]: I0223 13:07:39.291011 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:39.701682 master-0 kubenswrapper[7784]: I0223 13:07:39.701497 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:39.701682 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:39.701682 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:39.701682 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:39.701682 master-0 kubenswrapper[7784]: I0223 13:07:39.701633 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:40.701649 master-0 kubenswrapper[7784]: I0223 13:07:40.701512 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:40.701649 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:40.701649 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:40.701649 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:40.702741 master-0 kubenswrapper[7784]: I0223 13:07:40.701719 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:41.702218 master-0 kubenswrapper[7784]: I0223 13:07:41.702091 7784 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:41.702218 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:41.702218 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:41.702218 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:41.703206 master-0 kubenswrapper[7784]: I0223 13:07:41.702217 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:42.040663 master-0 kubenswrapper[7784]: E0223 13:07:42.040558 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:42.076299 master-0 kubenswrapper[7784]: E0223 13:07:42.076197 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 23 13:07:42.703028 master-0 kubenswrapper[7784]: I0223 13:07:42.702911 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:42.703028 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:42.703028 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 
13:07:42.703028 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:42.704184 master-0 kubenswrapper[7784]: I0223 13:07:42.703023 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:43.515128 master-0 kubenswrapper[7784]: I0223 13:07:43.515059 7784 scope.go:117] "RemoveContainer" containerID="2e5a5c45572547d68765aa2317c14d26774c109bceb25a699d848d50d57f589e" Feb 23 13:07:43.701432 master-0 kubenswrapper[7784]: I0223 13:07:43.701360 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:43.701432 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:43.701432 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:43.701432 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:43.701809 master-0 kubenswrapper[7784]: I0223 13:07:43.701441 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:44.580598 master-0 kubenswrapper[7784]: I0223 13:07:44.580506 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/1.log" Feb 23 13:07:44.581968 master-0 kubenswrapper[7784]: I0223 13:07:44.581302 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" 
event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerStarted","Data":"3e7dbf4208abe5c9d935ae2680f6b0cac93b049b64aaa57ef376ac31460e3774"} Feb 23 13:07:44.701460 master-0 kubenswrapper[7784]: I0223 13:07:44.701310 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:44.701460 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:44.701460 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:44.701460 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:44.702032 master-0 kubenswrapper[7784]: I0223 13:07:44.701460 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:45.703443 master-0 kubenswrapper[7784]: I0223 13:07:45.702593 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:45.703443 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:45.703443 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:45.703443 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:45.703443 master-0 kubenswrapper[7784]: I0223 13:07:45.702700 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:46.701407 master-0 kubenswrapper[7784]: I0223 
13:07:46.701189 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:46.701407 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:46.701407 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:46.701407 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:46.701407 master-0 kubenswrapper[7784]: I0223 13:07:46.701393 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:47.700794 master-0 kubenswrapper[7784]: I0223 13:07:47.700719 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:47.700794 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:47.700794 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:47.700794 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:47.701740 master-0 kubenswrapper[7784]: I0223 13:07:47.701523 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:48.702582 master-0 kubenswrapper[7784]: I0223 13:07:48.702516 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:48.702582 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:48.702582 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:48.702582 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:48.703694 master-0 kubenswrapper[7784]: I0223 13:07:48.703583 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:49.290218 master-0 kubenswrapper[7784]: I0223 13:07:49.290079 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:49.290218 master-0 kubenswrapper[7784]: I0223 13:07:49.290218 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:07:49.291134 master-0 kubenswrapper[7784]: I0223 13:07:49.291072 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"46d6784fe671f2c68a10b213a05d1645131c37e8e0e3bbefbef25989eca152bd"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 23 13:07:49.291251 master-0 kubenswrapper[7784]: I0223 13:07:49.291191 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" 
containerName="kube-controller-manager" containerID="cri-o://46d6784fe671f2c68a10b213a05d1645131c37e8e0e3bbefbef25989eca152bd" gracePeriod=30 Feb 23 13:07:49.626131 master-0 kubenswrapper[7784]: I0223 13:07:49.625885 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="46d6784fe671f2c68a10b213a05d1645131c37e8e0e3bbefbef25989eca152bd" exitCode=2 Feb 23 13:07:49.626131 master-0 kubenswrapper[7784]: I0223 13:07:49.625980 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"46d6784fe671f2c68a10b213a05d1645131c37e8e0e3bbefbef25989eca152bd"} Feb 23 13:07:49.626131 master-0 kubenswrapper[7784]: I0223 13:07:49.626092 7784 scope.go:117] "RemoveContainer" containerID="8a89fa6b8628f8a7f9fc8e50fe91e299d2c85709f7621b232ab51535aa86fd79" Feb 23 13:07:49.701390 master-0 kubenswrapper[7784]: I0223 13:07:49.701268 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:49.701390 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:49.701390 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:49.701390 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:49.701916 master-0 kubenswrapper[7784]: I0223 13:07:49.701423 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:50.637183 master-0 kubenswrapper[7784]: I0223 13:07:50.637103 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"} Feb 23 13:07:50.702030 master-0 kubenswrapper[7784]: I0223 13:07:50.701896 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:50.702030 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:50.702030 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:50.702030 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:50.702030 master-0 kubenswrapper[7784]: I0223 13:07:50.702027 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:51.701941 master-0 kubenswrapper[7784]: I0223 13:07:51.701820 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:51.701941 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:51.701941 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:51.701941 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:51.701941 master-0 kubenswrapper[7784]: I0223 13:07:51.701927 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 
13:07:52.041387 master-0 kubenswrapper[7784]: E0223 13:07:52.041274 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:52.701998 master-0 kubenswrapper[7784]: I0223 13:07:52.701851 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:52.701998 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:52.701998 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:52.701998 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:52.703321 master-0 kubenswrapper[7784]: I0223 13:07:52.702025 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:52.878305 master-0 kubenswrapper[7784]: E0223 13:07:52.878128 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 23 13:07:53.700976 master-0 kubenswrapper[7784]: I0223 13:07:53.700860 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:53.700976 master-0 kubenswrapper[7784]: [-]has-synced 
failed: reason withheld Feb 23 13:07:53.700976 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:53.700976 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:53.700976 master-0 kubenswrapper[7784]: I0223 13:07:53.700978 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:54.702582 master-0 kubenswrapper[7784]: I0223 13:07:54.702461 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:54.702582 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:54.702582 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:54.702582 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:54.702582 master-0 kubenswrapper[7784]: I0223 13:07:54.702575 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:55.702257 master-0 kubenswrapper[7784]: I0223 13:07:55.702155 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:55.702257 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:55.702257 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:55.702257 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:55.702624 master-0 
kubenswrapper[7784]: I0223 13:07:55.702298 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:56.289300 master-0 kubenswrapper[7784]: I0223 13:07:56.289169 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:07:56.701932 master-0 kubenswrapper[7784]: I0223 13:07:56.701711 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:56.701932 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:56.701932 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:56.701932 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:56.701932 master-0 kubenswrapper[7784]: I0223 13:07:56.701868 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:57.709042 master-0 kubenswrapper[7784]: I0223 13:07:57.708941 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:57.709042 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:57.709042 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:57.709042 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:57.709042 master-0 
kubenswrapper[7784]: I0223 13:07:57.709024 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:58.700693 master-0 kubenswrapper[7784]: I0223 13:07:58.700594 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:58.700693 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:58.700693 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:58.700693 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:58.701107 master-0 kubenswrapper[7784]: I0223 13:07:58.700725 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:59.289564 master-0 kubenswrapper[7784]: I0223 13:07:59.289335 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:59.700973 master-0 kubenswrapper[7784]: I0223 13:07:59.700919 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:07:59.700973 master-0 
kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:07:59.700973 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:07:59.700973 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:07:59.701386 master-0 kubenswrapper[7784]: I0223 13:07:59.701323 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:07:59.909838 master-0 kubenswrapper[7784]: E0223 13:07:59.909608 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-vwhpv.1896e1fc8dd95340 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-vwhpv,UID:9e0e3072-a35c-4404-891c-f31fafd0b4b1,APIVersion:v1,ResourceVersion:9979,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 18.938s (18.938s including waiting). 
Image size: 1202767548 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:06:18.244707136 +0000 UTC m=+320.979560809,LastTimestamp:2026-02-23 13:06:18.244707136 +0000 UTC m=+320.979560809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:08:00.477994 master-0 kubenswrapper[7784]: I0223 13:08:00.477861 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:08:00.565560 master-0 kubenswrapper[7784]: I0223 13:08:00.565495 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:08:00.565560 master-0 kubenswrapper[7784]: I0223 13:08:00.565554 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:08:00.566154 master-0 kubenswrapper[7784]: I0223 13:08:00.566010 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:08:00.566724 master-0 kubenswrapper[7784]: I0223 13:08:00.566193 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:08:00.700232 master-0 kubenswrapper[7784]: I0223 13:08:00.700143 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:00.700232 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:00.700232 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:00.700232 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:00.700232 master-0 kubenswrapper[7784]: I0223 13:08:00.700218 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:00.712792 master-0 kubenswrapper[7784]: I0223 13:08:00.712713 7784 generic.go:334] "Generic (PLEG): container finished" podID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerID="d561b42a38c0b7df53cfb7f78adebe36b09daba8cb18cb5c6854b40cced2e255" exitCode=0 Feb 23 13:08:00.712792 master-0 kubenswrapper[7784]: I0223 13:08:00.712783 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" event={"ID":"35e97ed9-695d-483e-8878-4f231c79f1d2","Type":"ContainerDied","Data":"d561b42a38c0b7df53cfb7f78adebe36b09daba8cb18cb5c6854b40cced2e255"} Feb 23 13:08:00.713090 master-0 kubenswrapper[7784]: I0223 13:08:00.712866 7784 scope.go:117] "RemoveContainer" containerID="e4ed838542af022eb9712b2516ce0b1c3c0ca74d3f39f916a6f32d58ec0e24c3" Feb 23 13:08:00.713578 master-0 kubenswrapper[7784]: I0223 13:08:00.713517 7784 scope.go:117] "RemoveContainer" 
containerID="d561b42a38c0b7df53cfb7f78adebe36b09daba8cb18cb5c6854b40cced2e255" Feb 23 13:08:00.713847 master-0 kubenswrapper[7784]: E0223 13:08:00.713797 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-588zk_openshift-marketplace(35e97ed9-695d-483e-8878-4f231c79f1d2)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" Feb 23 13:08:01.701559 master-0 kubenswrapper[7784]: I0223 13:08:01.701465 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:01.701559 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:01.701559 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:01.701559 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:01.702230 master-0 kubenswrapper[7784]: I0223 13:08:01.701587 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:02.042667 master-0 kubenswrapper[7784]: E0223 13:08:02.042569 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:02.702226 master-0 kubenswrapper[7784]: I0223 13:08:02.702116 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:02.702226 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:02.702226 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:02.702226 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:02.702226 master-0 kubenswrapper[7784]: I0223 13:08:02.702219 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:03.463907 master-0 kubenswrapper[7784]: E0223 13:08:03.463828 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 23 13:08:03.700934 master-0 kubenswrapper[7784]: I0223 13:08:03.700861 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:03.700934 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:03.700934 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:03.700934 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:03.701291 master-0 kubenswrapper[7784]: I0223 13:08:03.700970 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:04.479562 master-0 kubenswrapper[7784]: E0223 13:08:04.479479 7784 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 23 13:08:04.700648 master-0 kubenswrapper[7784]: I0223 13:08:04.700539 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:04.700648 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:04.700648 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:04.700648 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:04.700896 master-0 kubenswrapper[7784]: I0223 13:08:04.700682 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:04.747392 master-0 kubenswrapper[7784]: I0223 13:08:04.747112 7784 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="2e255bf3b7625705d5275336ecc4f0432c73e0f5b8fc01e16c0951117a71d88c" exitCode=0 Feb 23 13:08:04.747392 master-0 kubenswrapper[7784]: I0223 13:08:04.747281 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"2e255bf3b7625705d5275336ecc4f0432c73e0f5b8fc01e16c0951117a71d88c"} Feb 23 13:08:04.748056 master-0 kubenswrapper[7784]: I0223 13:08:04.748001 7784 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:08:04.748056 master-0 kubenswrapper[7784]: I0223 13:08:04.748033 7784 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:08:04.750185 master-0 kubenswrapper[7784]: I0223 13:08:04.750103 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/1.log" Feb 23 13:08:04.751181 master-0 kubenswrapper[7784]: I0223 13:08:04.751106 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/0.log" Feb 23 13:08:04.751388 master-0 kubenswrapper[7784]: I0223 13:08:04.751196 7784 generic.go:334] "Generic (PLEG): container finished" podID="5793184d-de96-49ad-a060-0fa0cf278a9c" containerID="d8c76aaec2e18c0f4ce6428d119d5e5d091d7dfe1971812eecc67daa115b3a23" exitCode=1 Feb 23 13:08:04.751388 master-0 kubenswrapper[7784]: I0223 13:08:04.751256 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerDied","Data":"d8c76aaec2e18c0f4ce6428d119d5e5d091d7dfe1971812eecc67daa115b3a23"} Feb 23 13:08:04.751388 master-0 kubenswrapper[7784]: I0223 13:08:04.751329 7784 scope.go:117] "RemoveContainer" containerID="8dbfb3a49d15de4419fc29dce0193ff2a8f2f1238053d11c98101bb8a51adb15" Feb 23 13:08:04.752180 master-0 kubenswrapper[7784]: I0223 13:08:04.752124 7784 scope.go:117] "RemoveContainer" containerID="d8c76aaec2e18c0f4ce6428d119d5e5d091d7dfe1971812eecc67daa115b3a23" Feb 23 13:08:04.752396 master-0 kubenswrapper[7784]: E0223 13:08:04.752361 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-6847bb4785-zw4nq_openshift-cluster-storage-operator(5793184d-de96-49ad-a060-0fa0cf278a9c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" podUID="5793184d-de96-49ad-a060-0fa0cf278a9c" Feb 23 13:08:05.701841 master-0 kubenswrapper[7784]: I0223 13:08:05.701737 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:05.701841 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:05.701841 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:05.701841 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:05.703078 master-0 kubenswrapper[7784]: I0223 13:08:05.701854 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:05.759606 master-0 kubenswrapper[7784]: I0223 13:08:05.759534 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/1.log" Feb 23 13:08:06.701460 master-0 kubenswrapper[7784]: I0223 13:08:06.701156 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:06.701460 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:06.701460 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:06.701460 master-0 kubenswrapper[7784]: healthz check failed Feb 23 
13:08:06.701899 master-0 kubenswrapper[7784]: I0223 13:08:06.701484 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:07.700727 master-0 kubenswrapper[7784]: I0223 13:08:07.700642 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:07.700727 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:07.700727 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:07.700727 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:07.701132 master-0 kubenswrapper[7784]: I0223 13:08:07.700744 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:08.701427 master-0 kubenswrapper[7784]: I0223 13:08:08.701314 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:08.701427 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:08.701427 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:08.701427 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:08.701427 master-0 kubenswrapper[7784]: I0223 13:08:08.701427 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" 
podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:09.290194 master-0 kubenswrapper[7784]: I0223 13:08:09.290085 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:09.701293 master-0 kubenswrapper[7784]: I0223 13:08:09.701038 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:09.701293 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:09.701293 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:09.701293 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:09.701293 master-0 kubenswrapper[7784]: I0223 13:08:09.701201 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:10.565294 master-0 kubenswrapper[7784]: I0223 13:08:10.565087 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:08:10.565294 master-0 kubenswrapper[7784]: I0223 13:08:10.565222 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:08:10.566215 master-0 kubenswrapper[7784]: I0223 13:08:10.566166 7784 
scope.go:117] "RemoveContainer" containerID="d561b42a38c0b7df53cfb7f78adebe36b09daba8cb18cb5c6854b40cced2e255" Feb 23 13:08:10.701923 master-0 kubenswrapper[7784]: I0223 13:08:10.701823 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:10.701923 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:10.701923 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:10.701923 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:10.702829 master-0 kubenswrapper[7784]: I0223 13:08:10.701960 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:10.798019 master-0 kubenswrapper[7784]: I0223 13:08:10.797896 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" event={"ID":"35e97ed9-695d-483e-8878-4f231c79f1d2","Type":"ContainerStarted","Data":"7ab0fa22823377f52b8cbc1e456fd01a54d81fd757847be04d19223083a66183"} Feb 23 13:08:10.798563 master-0 kubenswrapper[7784]: I0223 13:08:10.798463 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:08:10.800795 master-0 kubenswrapper[7784]: I0223 13:08:10.800701 7784 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-588zk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 23 13:08:10.800936 master-0 kubenswrapper[7784]: I0223 
13:08:10.800844 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" podUID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 23 13:08:11.702307 master-0 kubenswrapper[7784]: I0223 13:08:11.702183 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:11.702307 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:11.702307 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:11.702307 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:11.703020 master-0 kubenswrapper[7784]: I0223 13:08:11.702397 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:11.811710 master-0 kubenswrapper[7784]: I0223 13:08:11.811631 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:08:12.043940 master-0 kubenswrapper[7784]: E0223 13:08:12.043879 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:12.044265 master-0 kubenswrapper[7784]: E0223 13:08:12.044236 7784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 
13:08:12.702459 master-0 kubenswrapper[7784]: I0223 13:08:12.702335 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:12.702459 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:12.702459 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:12.702459 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:12.703742 master-0 kubenswrapper[7784]: I0223 13:08:12.703613 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:13.700122 master-0 kubenswrapper[7784]: I0223 13:08:13.700073 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:13.700122 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:13.700122 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:13.700122 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:13.700665 master-0 kubenswrapper[7784]: I0223 13:08:13.700603 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:14.700654 master-0 kubenswrapper[7784]: I0223 13:08:14.700555 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:14.700654 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:14.700654 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:14.700654 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:14.701302 master-0 kubenswrapper[7784]: I0223 13:08:14.700675 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:15.702564 master-0 kubenswrapper[7784]: I0223 13:08:15.702405 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:15.702564 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:15.702564 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:15.702564 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:15.702564 master-0 kubenswrapper[7784]: I0223 13:08:15.702555 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:16.467839 master-0 kubenswrapper[7784]: I0223 13:08:16.467680 7784 patch_prober.go:28] interesting pod/machine-config-daemon-q8bjq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:08:16.467839 master-0 kubenswrapper[7784]: I0223 13:08:16.467793 
7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" podUID="57803492-e1dd-4994-8330-1e9b393d54fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:08:16.515913 master-0 kubenswrapper[7784]: I0223 13:08:16.515797 7784 scope.go:117] "RemoveContainer" containerID="d8c76aaec2e18c0f4ce6428d119d5e5d091d7dfe1971812eecc67daa115b3a23" Feb 23 13:08:16.701614 master-0 kubenswrapper[7784]: I0223 13:08:16.701499 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:16.701614 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:16.701614 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:16.701614 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:16.701614 master-0 kubenswrapper[7784]: I0223 13:08:16.701600 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:16.846327 master-0 kubenswrapper[7784]: I0223 13:08:16.846248 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/1.log" Feb 23 13:08:16.846959 master-0 kubenswrapper[7784]: I0223 13:08:16.846428 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" 
event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerStarted","Data":"ba123bbd600141bc9e8df6c67f73eb77a113317a861d7b4a38912c3838e6ded0"} Feb 23 13:08:17.681069 master-0 kubenswrapper[7784]: E0223 13:08:17.680968 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 23 13:08:17.702369 master-0 kubenswrapper[7784]: I0223 13:08:17.702242 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:17.702369 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:17.702369 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:17.702369 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:17.702696 master-0 kubenswrapper[7784]: I0223 13:08:17.702439 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:18.287947 master-0 kubenswrapper[7784]: E0223 13:08:18.287850 7784 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:08:18.287947 master-0 kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": 
plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903" Netns:"/var/run/netns/0f1b8129-fe67-4077-a263-6708cfe09d79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 23 13:08:18.287947 master-0 kubenswrapper[7784]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:08:18.287947 master-0 kubenswrapper[7784]: > Feb 23 13:08:18.287947 master-0 kubenswrapper[7784]: E0223 13:08:18.287936 7784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:08:18.287947 master-0 
kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903" Netns:"/var/run/netns/0f1b8129-fe67-4077-a263-6708cfe09d79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 23 13:08:18.287947 master-0 kubenswrapper[7784]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:08:18.287947 master-0 kubenswrapper[7784]: > pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:08:18.288588 master-0 kubenswrapper[7784]: E0223 13:08:18.287958 7784 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:08:18.288588 master-0 kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903" Netns:"/var/run/netns/0f1b8129-fe67-4077-a263-6708cfe09d79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 23 13:08:18.288588 master-0 kubenswrapper[7784]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:08:18.288588 master-0 kubenswrapper[7784]: > pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:08:18.288588 master-0 kubenswrapper[7784]: E0223 13:08:18.288022 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager(e6f93af9-bdbb-4319-8ddb-e5458e8a9275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager(e6f93af9-bdbb-4319-8ddb-e5458e8a9275)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with 
status 400: 'ContainerID:\\\"469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903\\\" Netns:\\\"/var/run/netns/0f1b8129-fe67-4077-a263-6708cfe09d79\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=469f1794b8e342e7ada1509cf5f492dc4403b3768460a9d2dc5ac99621b19903;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" podUID="e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Feb 23 13:08:18.701298 master-0 kubenswrapper[7784]: I0223 13:08:18.701079 7784 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:18.701298 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:18.701298 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:18.701298 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:18.701298 master-0 kubenswrapper[7784]: I0223 13:08:18.701183 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:19.290917 master-0 kubenswrapper[7784]: I0223 13:08:19.290797 7784 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:19.291684 master-0 kubenswrapper[7784]: I0223 13:08:19.290961 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:08:19.291684 master-0 kubenswrapper[7784]: I0223 13:08:19.291658 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 23 13:08:19.291775 master-0 kubenswrapper[7784]: I0223 13:08:19.291723 7784 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" gracePeriod=30 Feb 23 13:08:19.409672 master-0 kubenswrapper[7784]: E0223 13:08:19.409603 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:08:19.701145 master-0 kubenswrapper[7784]: I0223 13:08:19.700950 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:08:19.701145 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:08:19.701145 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:08:19.701145 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:08:19.701145 master-0 kubenswrapper[7784]: I0223 13:08:19.701062 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:08:19.701145 master-0 kubenswrapper[7784]: I0223 13:08:19.701129 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:08:19.701924 master-0 kubenswrapper[7784]: I0223 13:08:19.701867 7784 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"269b02e4bdd6edd8e8fdf7d10edb62714b47f3af26d18d46c35faad3badc04c5"} pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" containerMessage="Container router failed startup probe, will be restarted" Feb 23 13:08:19.702003 master-0 kubenswrapper[7784]: I0223 13:08:19.701935 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" containerID="cri-o://269b02e4bdd6edd8e8fdf7d10edb62714b47f3af26d18d46c35faad3badc04c5" gracePeriod=3600 Feb 23 13:08:19.797607 master-0 kubenswrapper[7784]: I0223 13:08:19.797482 7784 status_manager.go:851] "Failed to get status for pod" podUID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Feb 23 13:08:19.869857 master-0 kubenswrapper[7784]: I0223 13:08:19.869781 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" exitCode=2 Feb 23 13:08:19.869857 master-0 kubenswrapper[7784]: I0223 13:08:19.869838 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"} Feb 23 13:08:19.870305 master-0 kubenswrapper[7784]: I0223 13:08:19.869891 7784 scope.go:117] "RemoveContainer" containerID="46d6784fe671f2c68a10b213a05d1645131c37e8e0e3bbefbef25989eca152bd" Feb 23 13:08:19.870687 master-0 kubenswrapper[7784]: I0223 13:08:19.870627 7784 scope.go:117] "RemoveContainer" 
containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:08:19.871076 master-0 kubenswrapper[7784]: E0223 13:08:19.871023 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:08:29.789705 master-0 kubenswrapper[7784]: I0223 13:08:29.789595 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:08:29.790625 master-0 kubenswrapper[7784]: I0223 13:08:29.790574 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:08:29.791108 master-0 kubenswrapper[7784]: E0223 13:08:29.791051 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:08:31.514960 master-0 kubenswrapper[7784]: I0223 13:08:31.514766 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:08:31.516054 master-0 kubenswrapper[7784]: I0223 13:08:31.515754 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:08:32.273892 master-0 kubenswrapper[7784]: E0223 13:08:32.273515 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c2af15d278f72034eecf3db74223b7e61f3d07c1a5c7ba760e7586915ff1b17e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e166d1252d7455d8bd62e43f2967e738ee9bdd6a09b7771a4187d82477ae7535\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1237042376},{\\\"names\\\":[\\\"registry.redhat.io
/redhat/community-operator-index@sha256:b5385b46d054c9ee73478bf23e07056d0b9f81d34619d0949927d8d9e791fcb5\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ebdc10b149ba97b999770285d06149ef92c780205d916c3cab994098e20be0ba\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210455233},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa8
4eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\
\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45a
f2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:33.913635 master-0 kubenswrapper[7784]: E0223 13:08:33.913400 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-w7wq9.1896e1fc8dda9c52 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-w7wq9,UID:3a5284f9-cbb7-400b-ab39-bfef60ec198b,APIVersion:v1,ResourceVersion:9877,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 19.952s (19.952s including waiting). 
Image size: 1210455233 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:06:18.244791378 +0000 UTC m=+320.979645071,LastTimestamp:2026-02-23 13:06:18.244791378 +0000 UTC m=+320.979645071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:08:34.082985 master-0 kubenswrapper[7784]: E0223 13:08:34.082866 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:08:38.751855 master-0 kubenswrapper[7784]: E0223 13:08:38.751770 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 23 13:08:39.039280 master-0 kubenswrapper[7784]: I0223 13:08:39.039222 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" event={"ID":"d7c80f4d-6b28-44f4-beef-01e705260452","Type":"ContainerDied","Data":"a12c9e7dba4505df30d1171e23f416e511ae32af8b1117ea50805030fe947775"} Feb 23 13:08:39.039466 master-0 kubenswrapper[7784]: I0223 13:08:39.039304 7784 scope.go:117] "RemoveContainer" containerID="2187448d7b4208e3e1befa756c107826cc44935cd19819e30170d5f0d754f882" Feb 23 13:08:39.039466 master-0 kubenswrapper[7784]: I0223 13:08:39.039235 7784 generic.go:334] "Generic (PLEG): container finished" podID="d7c80f4d-6b28-44f4-beef-01e705260452" containerID="a12c9e7dba4505df30d1171e23f416e511ae32af8b1117ea50805030fe947775" exitCode=0 Feb 23 13:08:39.040106 master-0 kubenswrapper[7784]: I0223 13:08:39.040064 7784 scope.go:117] "RemoveContainer" 
containerID="a12c9e7dba4505df30d1171e23f416e511ae32af8b1117ea50805030fe947775" Feb 23 13:08:39.040417 master-0 kubenswrapper[7784]: E0223 13:08:39.040379 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-5d8dfcdc87-7hpbz_openshift-ovn-kubernetes(d7c80f4d-6b28-44f4-beef-01e705260452)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" podUID="d7c80f4d-6b28-44f4-beef-01e705260452" Feb 23 13:08:39.041994 master-0 kubenswrapper[7784]: I0223 13:08:39.041951 7784 generic.go:334] "Generic (PLEG): container finished" podID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerID="31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9" exitCode=0 Feb 23 13:08:39.041994 master-0 kubenswrapper[7784]: I0223 13:08:39.041986 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" event={"ID":"d7c61886-6cc7-44aa-b56a-81cdcc670993","Type":"ContainerDied","Data":"31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9"} Feb 23 13:08:39.042473 master-0 kubenswrapper[7784]: I0223 13:08:39.042436 7784 scope.go:117] "RemoveContainer" containerID="31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9" Feb 23 13:08:39.042743 master-0 kubenswrapper[7784]: E0223 13:08:39.042699 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-69f44bb786-4zj6n_openshift-controller-manager(d7c61886-6cc7-44aa-b56a-81cdcc670993)\"" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" Feb 23 13:08:39.071975 master-0 kubenswrapper[7784]: I0223 13:08:39.071886 7784 scope.go:117] 
"RemoveContainer" containerID="12929995bc4c469f6a1c977c1403bda9305b2b652d95308c622e2a38faae5fab" Feb 23 13:08:40.055183 master-0 kubenswrapper[7784]: I0223 13:08:40.055073 7784 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="d2d864a84369989b9d11cae33199d20743ba17dbbbd9594567b6e432600359d1" exitCode=0 Feb 23 13:08:40.055183 master-0 kubenswrapper[7784]: I0223 13:08:40.055161 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"d2d864a84369989b9d11cae33199d20743ba17dbbbd9594567b6e432600359d1"} Feb 23 13:08:40.057162 master-0 kubenswrapper[7784]: I0223 13:08:40.055564 7784 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:08:40.057162 master-0 kubenswrapper[7784]: I0223 13:08:40.055585 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:08:41.019699 master-0 kubenswrapper[7784]: I0223 13:08:41.019612 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:08:41.019699 master-0 kubenswrapper[7784]: I0223 13:08:41.019714 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:08:41.020638 master-0 kubenswrapper[7784]: I0223 13:08:41.020592 7784 scope.go:117] "RemoveContainer" containerID="31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9" Feb 23 13:08:41.021039 master-0 kubenswrapper[7784]: E0223 13:08:41.020987 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager 
pod=controller-manager-69f44bb786-4zj6n_openshift-controller-manager(d7c61886-6cc7-44aa-b56a-81cdcc670993)\"" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" Feb 23 13:08:41.515604 master-0 kubenswrapper[7784]: I0223 13:08:41.515489 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:08:41.516702 master-0 kubenswrapper[7784]: E0223 13:08:41.515977 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:08:42.074595 master-0 kubenswrapper[7784]: I0223 13:08:42.074461 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-vfkqg_5ede583b-44b0-42af-92c9-f7b8938f7843/cluster-baremetal-operator/0.log" Feb 23 13:08:42.074595 master-0 kubenswrapper[7784]: I0223 13:08:42.074549 7784 generic.go:334] "Generic (PLEG): container finished" podID="5ede583b-44b0-42af-92c9-f7b8938f7843" containerID="9b6793307745f6a85fc70df6b4de715b7748d6182b66009e926d2209513a5af3" exitCode=1 Feb 23 13:08:42.074595 master-0 kubenswrapper[7784]: I0223 13:08:42.074589 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" event={"ID":"5ede583b-44b0-42af-92c9-f7b8938f7843","Type":"ContainerDied","Data":"9b6793307745f6a85fc70df6b4de715b7748d6182b66009e926d2209513a5af3"} Feb 23 13:08:42.075198 master-0 kubenswrapper[7784]: I0223 13:08:42.075161 7784 scope.go:117] "RemoveContainer" containerID="9b6793307745f6a85fc70df6b4de715b7748d6182b66009e926d2209513a5af3" Feb 23 
13:08:42.275022 master-0 kubenswrapper[7784]: E0223 13:08:42.274923 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:43.091159 master-0 kubenswrapper[7784]: I0223 13:08:43.091078 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-vfkqg_5ede583b-44b0-42af-92c9-f7b8938f7843/cluster-baremetal-operator/0.log" Feb 23 13:08:43.092042 master-0 kubenswrapper[7784]: I0223 13:08:43.091172 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" event={"ID":"5ede583b-44b0-42af-92c9-f7b8938f7843","Type":"ContainerStarted","Data":"6f3121c259224663d6ec144fe798e33fc572c39104ca4f8f4e2af6c94570e3bd"} Feb 23 13:08:45.112755 master-0 kubenswrapper[7784]: I0223 13:08:45.112645 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-z7jgz_0d134032-1c35-4b69-9336-bcdc9c1cb87d/machine-approver-controller/0.log" Feb 23 13:08:45.113659 master-0 kubenswrapper[7784]: I0223 13:08:45.113595 7784 generic.go:334] "Generic (PLEG): container finished" podID="0d134032-1c35-4b69-9336-bcdc9c1cb87d" containerID="49844090cc1129b2d843c2317ee9aa9edebd16f2ac5c94c083315778ac1b8f03" exitCode=255 Feb 23 13:08:45.113735 master-0 kubenswrapper[7784]: I0223 13:08:45.113670 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" event={"ID":"0d134032-1c35-4b69-9336-bcdc9c1cb87d","Type":"ContainerDied","Data":"49844090cc1129b2d843c2317ee9aa9edebd16f2ac5c94c083315778ac1b8f03"} Feb 23 13:08:45.114714 master-0 kubenswrapper[7784]: I0223 13:08:45.114665 7784 scope.go:117] "RemoveContainer" 
containerID="49844090cc1129b2d843c2317ee9aa9edebd16f2ac5c94c083315778ac1b8f03" Feb 23 13:08:46.121331 master-0 kubenswrapper[7784]: I0223 13:08:46.121268 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-z7jgz_0d134032-1c35-4b69-9336-bcdc9c1cb87d/machine-approver-controller/0.log" Feb 23 13:08:46.122283 master-0 kubenswrapper[7784]: I0223 13:08:46.122248 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" event={"ID":"0d134032-1c35-4b69-9336-bcdc9c1cb87d","Type":"ContainerStarted","Data":"bbc281da4660415ce9090f6d3d19beac11652de16a8530b22a07e133d4134274"} Feb 23 13:08:46.467979 master-0 kubenswrapper[7784]: I0223 13:08:46.467809 7784 patch_prober.go:28] interesting pod/machine-config-daemon-q8bjq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:08:46.467979 master-0 kubenswrapper[7784]: I0223 13:08:46.467911 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" podUID="57803492-e1dd-4994-8330-1e9b393d54fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:08:47.133273 master-0 kubenswrapper[7784]: I0223 13:08:47.133172 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/2.log" Feb 23 13:08:47.134806 master-0 kubenswrapper[7784]: I0223 13:08:47.134730 7784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/1.log" Feb 23 13:08:47.134942 master-0 kubenswrapper[7784]: I0223 13:08:47.134848 7784 generic.go:334] "Generic (PLEG): container finished" podID="5793184d-de96-49ad-a060-0fa0cf278a9c" containerID="ba123bbd600141bc9e8df6c67f73eb77a113317a861d7b4a38912c3838e6ded0" exitCode=1 Feb 23 13:08:47.135025 master-0 kubenswrapper[7784]: I0223 13:08:47.134956 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerDied","Data":"ba123bbd600141bc9e8df6c67f73eb77a113317a861d7b4a38912c3838e6ded0"} Feb 23 13:08:47.135103 master-0 kubenswrapper[7784]: I0223 13:08:47.135073 7784 scope.go:117] "RemoveContainer" containerID="d8c76aaec2e18c0f4ce6428d119d5e5d091d7dfe1971812eecc67daa115b3a23" Feb 23 13:08:47.136148 master-0 kubenswrapper[7784]: I0223 13:08:47.136085 7784 scope.go:117] "RemoveContainer" containerID="ba123bbd600141bc9e8df6c67f73eb77a113317a861d7b4a38912c3838e6ded0" Feb 23 13:08:47.136582 master-0 kubenswrapper[7784]: E0223 13:08:47.136543 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-zw4nq_openshift-cluster-storage-operator(5793184d-de96-49ad-a060-0fa0cf278a9c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" podUID="5793184d-de96-49ad-a060-0fa0cf278a9c" Feb 23 13:08:48.145144 master-0 kubenswrapper[7784]: I0223 13:08:48.144975 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/2.log" Feb 23 
13:08:51.084858 master-0 kubenswrapper[7784]: E0223 13:08:51.084698 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:08:51.516394 master-0 kubenswrapper[7784]: I0223 13:08:51.516236 7784 scope.go:117] "RemoveContainer" containerID="a12c9e7dba4505df30d1171e23f416e511ae32af8b1117ea50805030fe947775" Feb 23 13:08:52.178648 master-0 kubenswrapper[7784]: I0223 13:08:52.178525 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" event={"ID":"d7c80f4d-6b28-44f4-beef-01e705260452","Type":"ContainerStarted","Data":"e2a8f1dc59eaebccbfcc39b78241689147104336d408ad08ef77da047398ea84"} Feb 23 13:08:52.275795 master-0 kubenswrapper[7784]: E0223 13:08:52.275659 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:52.515667 master-0 kubenswrapper[7784]: I0223 13:08:52.515575 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:08:52.516045 master-0 kubenswrapper[7784]: E0223 13:08:52.515988 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:08:56.514608 master-0 kubenswrapper[7784]: I0223 
13:08:56.514530 7784 scope.go:117] "RemoveContainer" containerID="31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9" Feb 23 13:08:57.217184 master-0 kubenswrapper[7784]: I0223 13:08:57.217070 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" event={"ID":"d7c61886-6cc7-44aa-b56a-81cdcc670993","Type":"ContainerStarted","Data":"05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247"} Feb 23 13:08:57.217618 master-0 kubenswrapper[7784]: I0223 13:08:57.217546 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:08:57.226651 master-0 kubenswrapper[7784]: I0223 13:08:57.226594 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:08:59.236242 master-0 kubenswrapper[7784]: I0223 13:08:59.236178 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-pqjsm_e5802841-52dc-4d15-a252-0eac70e9fbbc/control-plane-machine-set-operator/0.log" Feb 23 13:08:59.236242 master-0 kubenswrapper[7784]: I0223 13:08:59.236247 7784 generic.go:334] "Generic (PLEG): container finished" podID="e5802841-52dc-4d15-a252-0eac70e9fbbc" containerID="a0b82533ef8a23dd50bebab82c0ca8db95bf68be3db11bf32c9c3702f2b24d95" exitCode=1 Feb 23 13:08:59.236853 master-0 kubenswrapper[7784]: I0223 13:08:59.236450 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" event={"ID":"e5802841-52dc-4d15-a252-0eac70e9fbbc","Type":"ContainerDied","Data":"a0b82533ef8a23dd50bebab82c0ca8db95bf68be3db11bf32c9c3702f2b24d95"} Feb 23 13:08:59.238193 master-0 kubenswrapper[7784]: I0223 13:08:59.238129 7784 scope.go:117] "RemoveContainer" 
containerID="a0b82533ef8a23dd50bebab82c0ca8db95bf68be3db11bf32c9c3702f2b24d95" Feb 23 13:08:59.515646 master-0 kubenswrapper[7784]: I0223 13:08:59.515602 7784 scope.go:117] "RemoveContainer" containerID="ba123bbd600141bc9e8df6c67f73eb77a113317a861d7b4a38912c3838e6ded0" Feb 23 13:08:59.516116 master-0 kubenswrapper[7784]: E0223 13:08:59.516092 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-zw4nq_openshift-cluster-storage-operator(5793184d-de96-49ad-a060-0fa0cf278a9c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" podUID="5793184d-de96-49ad-a060-0fa0cf278a9c" Feb 23 13:09:00.245605 master-0 kubenswrapper[7784]: I0223 13:09:00.245490 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-pqjsm_e5802841-52dc-4d15-a252-0eac70e9fbbc/control-plane-machine-set-operator/0.log" Feb 23 13:09:00.245605 master-0 kubenswrapper[7784]: I0223 13:09:00.245567 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" event={"ID":"e5802841-52dc-4d15-a252-0eac70e9fbbc","Type":"ContainerStarted","Data":"bdb24f225047b4d39bb2ee64389d356daa04af8f7293faf82c195837c0eb9364"} Feb 23 13:09:02.276986 master-0 kubenswrapper[7784]: E0223 13:09:02.276883 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:09:03.514916 master-0 kubenswrapper[7784]: I0223 13:09:03.514830 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 
13:09:03.519085 master-0 kubenswrapper[7784]: E0223 13:09:03.515264 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:09:07.298847 master-0 kubenswrapper[7784]: I0223 13:09:07.298748 7784 generic.go:334] "Generic (PLEG): container finished" podID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerID="269b02e4bdd6edd8e8fdf7d10edb62714b47f3af26d18d46c35faad3badc04c5" exitCode=0 Feb 23 13:09:07.298847 master-0 kubenswrapper[7784]: I0223 13:09:07.298820 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" event={"ID":"73ba4f16-0217-4bf1-8fc2-6b385eda0771","Type":"ContainerDied","Data":"269b02e4bdd6edd8e8fdf7d10edb62714b47f3af26d18d46c35faad3badc04c5"} Feb 23 13:09:07.298847 master-0 kubenswrapper[7784]: I0223 13:09:07.298860 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" event={"ID":"73ba4f16-0217-4bf1-8fc2-6b385eda0771","Type":"ContainerStarted","Data":"530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88"} Feb 23 13:09:07.698737 master-0 kubenswrapper[7784]: I0223 13:09:07.698313 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:09:07.702029 master-0 kubenswrapper[7784]: I0223 13:09:07.701953 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:07.702029 master-0 
kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:07.702029 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:07.702029 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:07.702311 master-0 kubenswrapper[7784]: I0223 13:09:07.702058 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:07.916703 master-0 kubenswrapper[7784]: E0223 13:09:07.916531 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{router-default-7b65dc9fcb-kcfgf.1896e1fc8e1705e5 openshift-ingress 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7b65dc9fcb-kcfgf,UID:73ba4f16-0217-4bf1-8fc2-6b385eda0771,APIVersion:v1,ResourceVersion:10144,FieldPath:spec.containers{router},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\" in 14.493s (14.493s including waiting). 
Image size: 487054953 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:06:18.248750565 +0000 UTC m=+320.983604198,LastTimestamp:2026-02-23 13:06:18.248750565 +0000 UTC m=+320.983604198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:09:08.086637 master-0 kubenswrapper[7784]: E0223 13:09:08.086517 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:09:08.701836 master-0 kubenswrapper[7784]: I0223 13:09:08.701710 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:08.701836 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:08.701836 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:08.701836 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:08.701836 master-0 kubenswrapper[7784]: I0223 13:09:08.701829 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:09.702686 master-0 kubenswrapper[7784]: I0223 13:09:09.702507 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:09.702686 master-0 
kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:09.702686 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:09.702686 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:09.703759 master-0 kubenswrapper[7784]: I0223 13:09:09.702691 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:10.701037 master-0 kubenswrapper[7784]: I0223 13:09:10.700925 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:10.701037 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:10.701037 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:10.701037 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:10.701037 master-0 kubenswrapper[7784]: I0223 13:09:10.701029 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:11.701791 master-0 kubenswrapper[7784]: I0223 13:09:11.701697 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:11.701791 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:11.701791 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:11.701791 master-0 kubenswrapper[7784]: healthz check failed Feb 23 
13:09:11.701791 master-0 kubenswrapper[7784]: I0223 13:09:11.701795 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:12.278012 master-0 kubenswrapper[7784]: E0223 13:09:12.277871 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:09:12.278012 master-0 kubenswrapper[7784]: E0223 13:09:12.277927 7784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:09:12.701276 master-0 kubenswrapper[7784]: I0223 13:09:12.701092 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:12.701276 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:12.701276 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:12.701276 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:12.701276 master-0 kubenswrapper[7784]: I0223 13:09:12.701213 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:13.516041 master-0 kubenswrapper[7784]: I0223 13:09:13.515832 7784 scope.go:117] "RemoveContainer" containerID="ba123bbd600141bc9e8df6c67f73eb77a113317a861d7b4a38912c3838e6ded0" Feb 23 13:09:13.698203 master-0 kubenswrapper[7784]: I0223 
13:09:13.698094 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:09:13.702333 master-0 kubenswrapper[7784]: I0223 13:09:13.701648 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:13.702333 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:13.702333 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:13.702333 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:13.702333 master-0 kubenswrapper[7784]: I0223 13:09:13.701756 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:14.059721 master-0 kubenswrapper[7784]: E0223 13:09:14.059432 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 23 13:09:14.359781 master-0 kubenswrapper[7784]: I0223 13:09:14.359745 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/2.log" Feb 23 13:09:14.360091 master-0 kubenswrapper[7784]: I0223 13:09:14.360065 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerStarted","Data":"52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b"} Feb 23 13:09:14.702210 master-0 kubenswrapper[7784]: I0223 
13:09:14.701903 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:14.702210 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:14.702210 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:14.702210 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:14.702210 master-0 kubenswrapper[7784]: I0223 13:09:14.702067 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:15.385966 master-0 kubenswrapper[7784]: I0223 13:09:15.385891 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"50a58d0ce7c894ef63637afa134bce96bf1f006a6bab4ac3f1ecd56d9d50fb4c"}
Feb 23 13:09:15.385966 master-0 kubenswrapper[7784]: I0223 13:09:15.385966 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"890a57877b523717f0581cb46be8e5e3ffe6e394eb800ce34a2eccd8a9ed9c26"}
Feb 23 13:09:15.386187 master-0 kubenswrapper[7784]: I0223 13:09:15.385988 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"d240edda552a194cc839e519a3ef6f597dac970b89cedcf002a5ca19e1dccea4"}
Feb 23 13:09:15.702152 master-0 kubenswrapper[7784]: I0223 13:09:15.702050 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:15.702152 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:15.702152 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:15.702152 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:15.702152 master-0 kubenswrapper[7784]: I0223 13:09:15.702130 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:16.400444 master-0 kubenswrapper[7784]: I0223 13:09:16.400318 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"42c072ce8addb2af8e54c2540d3d7bf94b7e1c11c8b5b3516735dd9bc3b16010"}
Feb 23 13:09:16.400444 master-0 kubenswrapper[7784]: I0223 13:09:16.400405 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"4f359c660906c23bf880e9290f1f8922442dc88b951f15e1fd0f3a2beaf307ff"}
Feb 23 13:09:16.400894 master-0 kubenswrapper[7784]: I0223 13:09:16.400743 7784 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5"
Feb 23 13:09:16.400894 master-0 kubenswrapper[7784]: I0223 13:09:16.400759 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5"
Feb 23 13:09:16.468252 master-0 kubenswrapper[7784]: I0223 13:09:16.468105 7784 patch_prober.go:28] interesting pod/machine-config-daemon-q8bjq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 13:09:16.468666 master-0 kubenswrapper[7784]: I0223 13:09:16.468256 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" podUID="57803492-e1dd-4994-8330-1e9b393d54fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 13:09:16.468666 master-0 kubenswrapper[7784]: I0223 13:09:16.468373 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:09:16.469696 master-0 kubenswrapper[7784]: I0223 13:09:16.469630 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dee213f15416abb9ebf800c43fce607fa7ba3b3cfee07ca0fa563630c117e685"} pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 13:09:16.469811 master-0 kubenswrapper[7784]: I0223 13:09:16.469755 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" podUID="57803492-e1dd-4994-8330-1e9b393d54fd" containerName="machine-config-daemon" containerID="cri-o://dee213f15416abb9ebf800c43fce607fa7ba3b3cfee07ca0fa563630c117e685" gracePeriod=600
Feb 23 13:09:16.701489 master-0 kubenswrapper[7784]: I0223 13:09:16.701376 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:16.701489 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:16.701489 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:16.701489 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:16.701899 master-0 kubenswrapper[7784]: I0223 13:09:16.701517 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:17.412650 master-0 kubenswrapper[7784]: I0223 13:09:17.412434 7784 generic.go:334] "Generic (PLEG): container finished" podID="57803492-e1dd-4994-8330-1e9b393d54fd" containerID="dee213f15416abb9ebf800c43fce607fa7ba3b3cfee07ca0fa563630c117e685" exitCode=0
Feb 23 13:09:17.412650 master-0 kubenswrapper[7784]: I0223 13:09:17.412539 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" event={"ID":"57803492-e1dd-4994-8330-1e9b393d54fd","Type":"ContainerDied","Data":"dee213f15416abb9ebf800c43fce607fa7ba3b3cfee07ca0fa563630c117e685"}
Feb 23 13:09:17.412650 master-0 kubenswrapper[7784]: I0223 13:09:17.412595 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" event={"ID":"57803492-e1dd-4994-8330-1e9b393d54fd","Type":"ContainerStarted","Data":"a16db1ea9dd61b6de30509c12a7aa33cf53b5eb7d802387f4cc40bd3fefa3279"}
Feb 23 13:09:17.514553 master-0 kubenswrapper[7784]: I0223 13:09:17.514453 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"
Feb 23 13:09:17.515040 master-0 kubenswrapper[7784]: E0223 13:09:17.514740 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:09:17.701598 master-0 kubenswrapper[7784]: I0223 13:09:17.701282 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:17.701598 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:17.701598 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:17.701598 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:17.701598 master-0 kubenswrapper[7784]: I0223 13:09:17.701523 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:18.560503 master-0 kubenswrapper[7784]: I0223 13:09:18.560424 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 23 13:09:18.560503 master-0 kubenswrapper[7784]: I0223 13:09:18.560479 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 23 13:09:18.701448 master-0 kubenswrapper[7784]: I0223 13:09:18.701376 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:18.701448 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:18.701448 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:18.701448 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:18.701894 master-0 kubenswrapper[7784]: I0223 13:09:18.701454 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:19.701796 master-0 kubenswrapper[7784]: I0223 13:09:19.701707 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:19.701796 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:19.701796 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:19.701796 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:19.702559 master-0 kubenswrapper[7784]: I0223 13:09:19.701840 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:19.800179 master-0 kubenswrapper[7784]: I0223 13:09:19.800051 7784 status_manager.go:851] "Failed to get status for pod" podUID="3a5284f9-cbb7-400b-ab39-bfef60ec198b" pod="openshift-marketplace/community-operators-w7wq9" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods community-operators-w7wq9)"
Feb 23 13:09:20.702675 master-0 kubenswrapper[7784]: I0223 13:09:20.702543 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:20.702675 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:20.702675 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:20.702675 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:20.703741 master-0 kubenswrapper[7784]: I0223 13:09:20.702684 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:21.701223 master-0 kubenswrapper[7784]: I0223 13:09:21.701091 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:21.701223 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:21.701223 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:21.701223 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:21.701754 master-0 kubenswrapper[7784]: I0223 13:09:21.701258 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:22.701934 master-0 kubenswrapper[7784]: I0223 13:09:22.701842 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:22.701934 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:22.701934 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:22.701934 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:22.702664 master-0 kubenswrapper[7784]: I0223 13:09:22.701945 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:23.702035 master-0 kubenswrapper[7784]: I0223 13:09:23.701933 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:23.702035 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:23.702035 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:23.702035 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:23.702659 master-0 kubenswrapper[7784]: I0223 13:09:23.702045 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:24.701362 master-0 kubenswrapper[7784]: I0223 13:09:24.701245 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:24.701362 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:24.701362 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:24.701362 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:24.701798 master-0 kubenswrapper[7784]: I0223 13:09:24.701388 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:25.088271 master-0 kubenswrapper[7784]: E0223 13:09:25.088150 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 23 13:09:25.702088 master-0 kubenswrapper[7784]: I0223 13:09:25.701986 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:25.702088 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:25.702088 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:25.702088 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:25.702488 master-0 kubenswrapper[7784]: I0223 13:09:25.702083 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:26.701705 master-0 kubenswrapper[7784]: I0223 13:09:26.701571 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:26.701705 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:26.701705 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:26.701705 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:26.702703 master-0 kubenswrapper[7784]: I0223 13:09:26.701732 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:27.701829 master-0 kubenswrapper[7784]: I0223 13:09:27.701683 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:27.701829 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:27.701829 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:27.701829 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:27.701829 master-0 kubenswrapper[7784]: I0223 13:09:27.701816 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:28.595044 master-0 kubenswrapper[7784]: I0223 13:09:28.594972 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 23 13:09:28.702671 master-0 kubenswrapper[7784]: I0223 13:09:28.702553 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:28.702671 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:28.702671 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:28.702671 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:28.702671 master-0 kubenswrapper[7784]: I0223 13:09:28.702640 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:29.701472 master-0 kubenswrapper[7784]: I0223 13:09:29.701381 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:29.701472 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:29.701472 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:29.701472 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:29.701472 master-0 kubenswrapper[7784]: I0223 13:09:29.701449 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:30.515909 master-0 kubenswrapper[7784]: I0223 13:09:30.515803 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"
Feb 23 13:09:30.517159 master-0 kubenswrapper[7784]: E0223 13:09:30.516276 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:09:30.700412 master-0 kubenswrapper[7784]: I0223 13:09:30.700289 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:30.700412 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:30.700412 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:30.700412 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:30.700412 master-0 kubenswrapper[7784]: I0223 13:09:30.700421 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:31.701308 master-0 kubenswrapper[7784]: I0223 13:09:31.701226 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:31.701308 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:31.701308 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:31.701308 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:31.701308 master-0 kubenswrapper[7784]: I0223 13:09:31.701301 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:32.272827 master-0 kubenswrapper[7784]: E0223 13:09:32.272733 7784 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 23 13:09:32.272827 master-0 kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b" Netns:"/var/run/netns/4af33a52-ce53-4896-b1f5-866ed245a22b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:09:32.272827 master-0 kubenswrapper[7784]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:09:32.272827 master-0 kubenswrapper[7784]: >
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: E0223 13:09:32.272850 7784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b" Netns:"/var/run/netns/4af33a52-ce53-4896-b1f5-866ed245a22b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: > pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: E0223 13:09:32.272876 7784 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b" Netns:"/var/run/netns/4af33a52-ce53-4896-b1f5-866ed245a22b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: > pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:09:32.273271 master-0 kubenswrapper[7784]: E0223 13:09:32.272959 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager(e6f93af9-bdbb-4319-8ddb-e5458e8a9275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager(e6f93af9-bdbb-4319-8ddb-e5458e8a9275)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b\\\" Netns:\\\"/var/run/netns/4af33a52-ce53-4896-b1f5-866ed245a22b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=61e48845bf529e552fb96c6c2875288def50e0d941d973e210455f49f347512b;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" podUID="e6f93af9-bdbb-4319-8ddb-e5458e8a9275"
Feb 23 13:09:32.418167 master-0 kubenswrapper[7784]: E0223 13:09:32.417746 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c2af15d278f72034eecf3db74223b7e61f3d07c1a5c7ba760e7586915ff1b17e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e166d1252d7455d8bd62e43f2967e738ee9bdd6a09b7771a4187d82477ae7535\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1237042376},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5385b46d054c9ee73478bf23e07056d0b9f81d34619d0949927d8d9e791fcb5\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ebdc10b149ba97b999770285d06149ef92c780205d916c3cab994098e20be0ba\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210455233},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeByt
es\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b8
46666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:09:32.700953 master-0 kubenswrapper[7784]: I0223 13:09:32.700804 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:32.700953 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:32.700953 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:32.700953 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:32.702127 master-0 kubenswrapper[7784]: I0223 13:09:32.701550 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:33.580820 master-0 kubenswrapper[7784]: I0223 13:09:33.580766 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 23 13:09:33.700707 master-0 kubenswrapper[7784]: I0223 13:09:33.700620 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:33.700707 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:33.700707 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:33.700707 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:33.700707 master-0 kubenswrapper[7784]: I0223 
13:09:33.700696 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:34.701411 master-0 kubenswrapper[7784]: I0223 13:09:34.701275 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:34.701411 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:34.701411 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:34.701411 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:34.702449 master-0 kubenswrapper[7784]: I0223 13:09:34.701413 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:35.701011 master-0 kubenswrapper[7784]: I0223 13:09:35.700876 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:35.701011 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:35.701011 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:35.701011 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:35.702234 master-0 kubenswrapper[7784]: I0223 13:09:35.701033 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 23 13:09:36.702318 master-0 kubenswrapper[7784]: I0223 13:09:36.702198 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:36.702318 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:36.702318 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:36.702318 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:36.702318 master-0 kubenswrapper[7784]: I0223 13:09:36.702295 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:37.702110 master-0 kubenswrapper[7784]: I0223 13:09:37.702013 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:37.702110 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:37.702110 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:37.702110 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:37.702110 master-0 kubenswrapper[7784]: I0223 13:09:37.702114 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:38.701909 master-0 kubenswrapper[7784]: I0223 13:09:38.701780 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:38.701909 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:38.701909 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:38.701909 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:38.701909 master-0 kubenswrapper[7784]: I0223 13:09:38.701887 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:39.701526 master-0 kubenswrapper[7784]: I0223 13:09:39.701395 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:39.701526 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:39.701526 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:39.701526 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:39.701526 master-0 kubenswrapper[7784]: I0223 13:09:39.701511 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:40.701928 master-0 kubenswrapper[7784]: I0223 13:09:40.701805 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:40.701928 master-0 kubenswrapper[7784]: 
[-]has-synced failed: reason withheld Feb 23 13:09:40.701928 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:40.701928 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:40.702919 master-0 kubenswrapper[7784]: I0223 13:09:40.701912 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:41.701987 master-0 kubenswrapper[7784]: I0223 13:09:41.701889 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:41.701987 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:41.701987 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:41.701987 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:41.703677 master-0 kubenswrapper[7784]: I0223 13:09:41.703625 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:41.920262 master-0 kubenswrapper[7784]: E0223 13:09:41.920039 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-config-server-97rhg.1896e1fc8ef5ec9d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-server-97rhg,UID:bdad149d-da6f-49ac-85e5-deb01f161166,APIVersion:v1,ResourceVersion:10254,FieldPath:spec.containers{machine-config-server},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:06:18.263358621 +0000 UTC m=+320.998212254,LastTimestamp:2026-02-23 13:06:18.263358621 +0000 UTC m=+320.998212254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:09:42.090170 master-0 kubenswrapper[7784]: E0223 13:09:42.090038 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:09:42.418612 master-0 kubenswrapper[7784]: E0223 13:09:42.418370 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:09:42.701562 master-0 kubenswrapper[7784]: I0223 13:09:42.701315 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:42.701562 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:42.701562 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 
13:09:42.701562 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:42.701562 master-0 kubenswrapper[7784]: I0223 13:09:42.701441 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:43.515168 master-0 kubenswrapper[7784]: I0223 13:09:43.515097 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:09:43.515582 master-0 kubenswrapper[7784]: E0223 13:09:43.515537 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:09:43.700472 master-0 kubenswrapper[7784]: I0223 13:09:43.700396 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:43.700472 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:43.700472 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:43.700472 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:43.700752 master-0 kubenswrapper[7784]: I0223 13:09:43.700476 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:44.618850 master-0 
kubenswrapper[7784]: I0223 13:09:44.618768 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/3.log" Feb 23 13:09:44.619722 master-0 kubenswrapper[7784]: I0223 13:09:44.619243 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/2.log" Feb 23 13:09:44.619722 master-0 kubenswrapper[7784]: I0223 13:09:44.619333 7784 generic.go:334] "Generic (PLEG): container finished" podID="5793184d-de96-49ad-a060-0fa0cf278a9c" containerID="52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b" exitCode=1 Feb 23 13:09:44.619722 master-0 kubenswrapper[7784]: I0223 13:09:44.619419 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerDied","Data":"52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b"} Feb 23 13:09:44.619722 master-0 kubenswrapper[7784]: I0223 13:09:44.619472 7784 scope.go:117] "RemoveContainer" containerID="ba123bbd600141bc9e8df6c67f73eb77a113317a861d7b4a38912c3838e6ded0" Feb 23 13:09:44.620396 master-0 kubenswrapper[7784]: I0223 13:09:44.620298 7784 scope.go:117] "RemoveContainer" containerID="52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b" Feb 23 13:09:44.620767 master-0 kubenswrapper[7784]: E0223 13:09:44.620715 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-zw4nq_openshift-cluster-storage-operator(5793184d-de96-49ad-a060-0fa0cf278a9c)\"" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" podUID="5793184d-de96-49ad-a060-0fa0cf278a9c" Feb 23 13:09:44.699931 master-0 kubenswrapper[7784]: I0223 13:09:44.699838 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:44.699931 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:44.699931 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:44.699931 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:44.700321 master-0 kubenswrapper[7784]: I0223 13:09:44.699934 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:45.628283 master-0 kubenswrapper[7784]: I0223 13:09:45.628219 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/3.log" Feb 23 13:09:45.701410 master-0 kubenswrapper[7784]: I0223 13:09:45.701304 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:45.701410 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:45.701410 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:45.701410 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:45.701745 master-0 kubenswrapper[7784]: I0223 13:09:45.701410 7784 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:46.514133 master-0 kubenswrapper[7784]: I0223 13:09:46.514027 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:09:46.515032 master-0 kubenswrapper[7784]: I0223 13:09:46.514978 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:09:46.701260 master-0 kubenswrapper[7784]: I0223 13:09:46.701169 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:46.701260 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:46.701260 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:46.701260 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:46.701835 master-0 kubenswrapper[7784]: I0223 13:09:46.701281 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:47.701407 master-0 kubenswrapper[7784]: I0223 13:09:47.701285 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:47.701407 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:47.701407 
master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:47.701407 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:47.702620 master-0 kubenswrapper[7784]: I0223 13:09:47.701420 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:48.701970 master-0 kubenswrapper[7784]: I0223 13:09:48.701820 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:48.701970 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:48.701970 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:48.701970 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:48.703078 master-0 kubenswrapper[7784]: I0223 13:09:48.701984 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:49.700948 master-0 kubenswrapper[7784]: I0223 13:09:49.700824 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:49.700948 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:49.700948 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:49.700948 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:49.700948 master-0 kubenswrapper[7784]: I0223 13:09:49.700928 7784 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:50.409069 master-0 kubenswrapper[7784]: E0223 13:09:50.408974 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 23 13:09:50.661980 master-0 kubenswrapper[7784]: I0223 13:09:50.661772 7784 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:09:50.661980 master-0 kubenswrapper[7784]: I0223 13:09:50.661824 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:09:50.701259 master-0 kubenswrapper[7784]: I0223 13:09:50.701133 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:09:50.701259 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:09:50.701259 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:09:50.701259 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:09:50.701699 master-0 kubenswrapper[7784]: I0223 13:09:50.701282 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:09:51.701259 master-0 kubenswrapper[7784]: I0223 13:09:51.701153 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:51.701259 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:51.701259 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:51.701259 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:51.701259 master-0 kubenswrapper[7784]: I0223 13:09:51.701259 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:52.419385 master-0 kubenswrapper[7784]: E0223 13:09:52.419205 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:09:52.702526 master-0 kubenswrapper[7784]: I0223 13:09:52.702269 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:52.702526 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:52.702526 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:52.702526 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:52.702526 master-0 kubenswrapper[7784]: I0223 13:09:52.702461 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:53.701292 master-0 kubenswrapper[7784]: I0223 13:09:53.701213 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:53.701292 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:53.701292 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:53.701292 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:53.701952 master-0 kubenswrapper[7784]: I0223 13:09:53.701297 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:54.701293 master-0 kubenswrapper[7784]: I0223 13:09:54.701212 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:54.701293 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:54.701293 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:54.701293 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:54.701852 master-0 kubenswrapper[7784]: I0223 13:09:54.701324 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:55.700621 master-0 kubenswrapper[7784]: I0223 13:09:55.700532 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:55.700621 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:55.700621 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:55.700621 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:55.701075 master-0 kubenswrapper[7784]: I0223 13:09:55.700632 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:56.701131 master-0 kubenswrapper[7784]: I0223 13:09:56.701068 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:56.701131 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:56.701131 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:56.701131 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:56.701774 master-0 kubenswrapper[7784]: I0223 13:09:56.701152 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:57.515285 master-0 kubenswrapper[7784]: I0223 13:09:57.515196 7784 scope.go:117] "RemoveContainer" containerID="52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b"
Feb 23 13:09:57.515858 master-0 kubenswrapper[7784]: E0223 13:09:57.515604 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-zw4nq_openshift-cluster-storage-operator(5793184d-de96-49ad-a060-0fa0cf278a9c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" podUID="5793184d-de96-49ad-a060-0fa0cf278a9c"
Feb 23 13:09:57.701696 master-0 kubenswrapper[7784]: I0223 13:09:57.701605 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:57.701696 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:57.701696 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:57.701696 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:57.702866 master-0 kubenswrapper[7784]: I0223 13:09:57.701714 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:58.516006 master-0 kubenswrapper[7784]: I0223 13:09:58.515920 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"
Feb 23 13:09:58.516403 master-0 kubenswrapper[7784]: E0223 13:09:58.516363 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:09:58.702755 master-0 kubenswrapper[7784]: I0223 13:09:58.702624 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:58.702755 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:58.702755 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:58.702755 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:58.703951 master-0 kubenswrapper[7784]: I0223 13:09:58.702758 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:09:59.091137 master-0 kubenswrapper[7784]: E0223 13:09:59.091022 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 23 13:09:59.702241 master-0 kubenswrapper[7784]: I0223 13:09:59.702146 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:09:59.702241 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:09:59.702241 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:09:59.702241 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:09:59.703505 master-0 kubenswrapper[7784]: I0223 13:09:59.702253 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:00.702543 master-0 kubenswrapper[7784]: I0223 13:10:00.702395 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:00.702543 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:00.702543 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:00.702543 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:00.702543 master-0 kubenswrapper[7784]: I0223 13:10:00.702537 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:01.701875 master-0 kubenswrapper[7784]: I0223 13:10:01.701725 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:01.701875 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:01.701875 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:01.701875 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:01.702392 master-0 kubenswrapper[7784]: I0223 13:10:01.701909 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:02.420588 master-0 kubenswrapper[7784]: E0223 13:10:02.420508 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:10:02.700923 master-0 kubenswrapper[7784]: I0223 13:10:02.700801 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:02.700923 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:02.700923 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:02.700923 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:02.700923 master-0 kubenswrapper[7784]: I0223 13:10:02.700880 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:03.700556 master-0 kubenswrapper[7784]: I0223 13:10:03.700498 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:03.700556 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:03.700556 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:03.700556 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:03.701104 master-0 kubenswrapper[7784]: I0223 13:10:03.700572 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:04.701471 master-0 kubenswrapper[7784]: I0223 13:10:04.701302 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:04.701471 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:04.701471 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:04.701471 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:04.702905 master-0 kubenswrapper[7784]: I0223 13:10:04.701496 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:05.701730 master-0 kubenswrapper[7784]: I0223 13:10:05.701639 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:05.701730 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:05.701730 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:05.701730 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:05.702893 master-0 kubenswrapper[7784]: I0223 13:10:05.701744 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:06.702203 master-0 kubenswrapper[7784]: I0223 13:10:06.702081 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:06.702203 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:06.702203 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:06.702203 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:06.703198 master-0 kubenswrapper[7784]: I0223 13:10:06.702206 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:07.701971 master-0 kubenswrapper[7784]: I0223 13:10:07.701871 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:07.701971 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:07.701971 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:07.701971 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:07.703081 master-0 kubenswrapper[7784]: I0223 13:10:07.701985 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:08.701667 master-0 kubenswrapper[7784]: I0223 13:10:08.701533 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:08.701667 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:08.701667 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:08.701667 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:08.702274 master-0 kubenswrapper[7784]: I0223 13:10:08.701688 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:09.701543 master-0 kubenswrapper[7784]: I0223 13:10:09.701459 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:09.701543 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:09.701543 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:09.701543 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:09.702816 master-0 kubenswrapper[7784]: I0223 13:10:09.702704 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:10.515090 master-0 kubenswrapper[7784]: I0223 13:10:10.514992 7784 scope.go:117] "RemoveContainer" containerID="52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b"
Feb 23 13:10:10.515567 master-0 kubenswrapper[7784]: E0223 13:10:10.515385 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-zw4nq_openshift-cluster-storage-operator(5793184d-de96-49ad-a060-0fa0cf278a9c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" podUID="5793184d-de96-49ad-a060-0fa0cf278a9c"
Feb 23 13:10:10.701769 master-0 kubenswrapper[7784]: I0223 13:10:10.701658 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:10.701769 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:10.701769 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:10.701769 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:10.701769 master-0 kubenswrapper[7784]: I0223 13:10:10.701765 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:11.701891 master-0 kubenswrapper[7784]: I0223 13:10:11.701816 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:11.701891 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:11.701891 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:11.701891 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:11.701891 master-0 kubenswrapper[7784]: I0223 13:10:11.701902 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:12.421557 master-0 kubenswrapper[7784]: E0223 13:10:12.421469 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:10:12.421557 master-0 kubenswrapper[7784]: E0223 13:10:12.421535 7784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 23 13:10:12.702928 master-0 kubenswrapper[7784]: I0223 13:10:12.702739 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:12.702928 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:12.702928 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:12.702928 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:12.702928 master-0 kubenswrapper[7784]: I0223 13:10:12.702890 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:13.515539 master-0 kubenswrapper[7784]: I0223 13:10:13.515462 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"
Feb 23 13:10:13.516170 master-0 kubenswrapper[7784]: E0223 13:10:13.515763 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:10:13.702080 master-0 kubenswrapper[7784]: I0223 13:10:13.701930 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:13.702080 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:13.702080 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:13.702080 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:13.702080 master-0 kubenswrapper[7784]: I0223 13:10:13.702055 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:14.701893 master-0 kubenswrapper[7784]: I0223 13:10:14.701760 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:14.701893 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:14.701893 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:14.701893 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:14.701893 master-0 kubenswrapper[7784]: I0223 13:10:14.701892 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:15.701659 master-0 kubenswrapper[7784]: I0223 13:10:15.701516 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:15.701659 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:15.701659 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:15.701659 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:15.701659 master-0 kubenswrapper[7784]: I0223 13:10:15.701652 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:15.924656 master-0 kubenswrapper[7784]: E0223 13:10:15.924430 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-vwhpv.1896e1fcae48693e openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-vwhpv,UID:9e0e3072-a35c-4404-891c-f31fafd0b4b1,APIVersion:v1,ResourceVersion:9979,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container: extract-content,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:06:18.788858174 +0000 UTC m=+321.523711817,LastTimestamp:2026-02-23 13:06:18.788858174 +0000 UTC m=+321.523711817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:10:16.093259 master-0 kubenswrapper[7784]: E0223 13:10:16.093126 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 23 13:10:16.702121 master-0 kubenswrapper[7784]: I0223 13:10:16.702009 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:16.702121 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:16.702121 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:16.702121 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:16.703083 master-0 kubenswrapper[7784]: I0223 13:10:16.702124 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:17.700900 master-0 kubenswrapper[7784]: I0223 13:10:17.700818 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:17.700900 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:17.700900 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:17.700900 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:17.701188 master-0 kubenswrapper[7784]: I0223 13:10:17.700905 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:18.702445 master-0 kubenswrapper[7784]: I0223 13:10:18.702294 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:18.702445 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:18.702445 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:18.702445 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:18.703463 master-0 kubenswrapper[7784]: I0223 13:10:18.702441 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:19.701576 master-0 kubenswrapper[7784]: I0223 13:10:19.701467 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:19.701576 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:19.701576 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:19.701576 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:19.701891 master-0 kubenswrapper[7784]: I0223 13:10:19.701611 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:19.802590 master-0 kubenswrapper[7784]: I0223 13:10:19.802477 7784 status_manager.go:851] "Failed to get status for pod" podUID="56c3cb71c9851003c8de7e7c5db4b87e" pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-scheduler-master-0)"
Feb 23 13:10:20.701263 master-0 kubenswrapper[7784]: I0223 13:10:20.701157 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:20.701263 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:20.701263 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:20.701263 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:20.701613 master-0 kubenswrapper[7784]: I0223 13:10:20.701274 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:21.701485 master-0 kubenswrapper[7784]: I0223 13:10:21.701382 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:21.701485 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:21.701485 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:21.701485 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:21.702580 master-0 kubenswrapper[7784]: I0223 13:10:21.701491 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:22.702908 master-0 kubenswrapper[7784]: I0223 13:10:22.702798 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:22.702908 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:22.702908 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:22.702908 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:22.704541 master-0 kubenswrapper[7784]: I0223 13:10:22.702930 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:23.530959 master-0 kubenswrapper[7784]: I0223 13:10:23.530857 7784 scope.go:117] "RemoveContainer" containerID="52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b"
Feb 23 13:10:23.534081 master-0 kubenswrapper[7784]: E0223 13:10:23.534011 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-zw4nq_openshift-cluster-storage-operator(5793184d-de96-49ad-a060-0fa0cf278a9c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" podUID="5793184d-de96-49ad-a060-0fa0cf278a9c"
Feb 23 13:10:23.701092 master-0 kubenswrapper[7784]: I0223 13:10:23.700964 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:23.701092 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:23.701092 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:23.701092 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:23.701540 master-0 kubenswrapper[7784]: I0223 13:10:23.701108 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:24.516426 master-0 kubenswrapper[7784]: I0223 13:10:24.516245 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"
Feb 23 13:10:24.517102 master-0 kubenswrapper[7784]: E0223 13:10:24.516725 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 23 13:10:24.666146 master-0 kubenswrapper[7784]: E0223 13:10:24.666009 7784 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 23 13:10:24.702548 master-0 kubenswrapper[7784]: I0223 13:10:24.702435 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:24.702548 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:24.702548 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:24.702548 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:24.702967 master-0 kubenswrapper[7784]: I0223 13:10:24.702573 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:25.702193 master-0 kubenswrapper[7784]: I0223 13:10:25.702113 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:25.702193 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:25.702193 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:25.702193 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:25.702771 master-0 kubenswrapper[7784]: I0223 13:10:25.702207 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:26.701562 master-0 kubenswrapper[7784]: I0223 13:10:26.701432 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:26.701562 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:26.701562 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:26.701562 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:26.702187 master-0 kubenswrapper[7784]: I0223 13:10:26.701566 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:27.701187 master-0 kubenswrapper[7784]: I0223 13:10:27.701098 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:27.701187 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:27.701187 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:27.701187 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:27.702511 master-0 kubenswrapper[7784]: I0223 13:10:27.702456 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:28.701571 master-0 kubenswrapper[7784]: I0223 13:10:28.701429 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:28.701571 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:10:28.701571 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:10:28.701571 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:10:28.702656 master-0 kubenswrapper[7784]: I0223 13:10:28.701582 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:10:29.701504 master-0 kubenswrapper[7784]: I0223 13:10:29.701428 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:10:29.701504 master-0
kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:29.701504 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:29.701504 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:29.702794 master-0 kubenswrapper[7784]: I0223 13:10:29.701527 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:30.701992 master-0 kubenswrapper[7784]: I0223 13:10:30.701762 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:30.701992 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:30.701992 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:30.701992 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:30.701992 master-0 kubenswrapper[7784]: I0223 13:10:30.701923 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:31.700977 master-0 kubenswrapper[7784]: I0223 13:10:31.700888 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:31.700977 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:31.700977 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:31.700977 master-0 kubenswrapper[7784]: healthz check failed Feb 23 
13:10:31.700977 master-0 kubenswrapper[7784]: I0223 13:10:31.700970 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:32.618884 master-0 kubenswrapper[7784]: E0223 13:10:32.618603 7784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:10:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:10:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:10:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:10:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c2af15d278f72034eecf3db74223b7e61f3d07c1a5c7ba760e7586915ff1b17e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@
sha256:e166d1252d7455d8bd62e43f2967e738ee9bdd6a09b7771a4187d82477ae7535\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1237042376},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5385b46d054c9ee73478bf23e07056d0b9f81d34619d0949927d8d9e791fcb5\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ebdc10b149ba97b999770285d06149ef92c780205d916c3cab994098e20be0ba\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210455233},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\
\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484
349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:10:32.701168 master-0 kubenswrapper[7784]: I0223 13:10:32.701068 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:32.701168 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:32.701168 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:32.701168 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:32.701594 master-0 kubenswrapper[7784]: I0223 13:10:32.701177 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:33.095623 master-0 kubenswrapper[7784]: E0223 13:10:33.095505 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 
13:10:33.701041 master-0 kubenswrapper[7784]: I0223 13:10:33.700928 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:33.701041 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:33.701041 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:33.701041 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:33.701905 master-0 kubenswrapper[7784]: I0223 13:10:33.701037 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:34.515393 master-0 kubenswrapper[7784]: I0223 13:10:34.515294 7784 scope.go:117] "RemoveContainer" containerID="52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b" Feb 23 13:10:34.701812 master-0 kubenswrapper[7784]: I0223 13:10:34.701683 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:34.701812 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:34.701812 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:34.701812 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:34.701812 master-0 kubenswrapper[7784]: I0223 13:10:34.701810 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:34.998986 master-0 
kubenswrapper[7784]: I0223 13:10:34.998803 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/3.log" Feb 23 13:10:34.998986 master-0 kubenswrapper[7784]: I0223 13:10:34.998889 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" event={"ID":"5793184d-de96-49ad-a060-0fa0cf278a9c","Type":"ContainerStarted","Data":"bbe08e09bd028a763822645d92b3c4f52cf966fad6781d8dd8aab117094f4c03"} Feb 23 13:10:35.701048 master-0 kubenswrapper[7784]: I0223 13:10:35.700948 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:35.701048 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:35.701048 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:35.701048 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:35.701048 master-0 kubenswrapper[7784]: I0223 13:10:35.701030 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:36.701278 master-0 kubenswrapper[7784]: I0223 13:10:36.701156 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:36.701278 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:36.701278 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 
13:10:36.701278 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:36.701278 master-0 kubenswrapper[7784]: I0223 13:10:36.701231 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:37.515948 master-0 kubenswrapper[7784]: I0223 13:10:37.515828 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:10:37.516471 master-0 kubenswrapper[7784]: E0223 13:10:37.516413 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:10:37.702030 master-0 kubenswrapper[7784]: I0223 13:10:37.701898 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:37.702030 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:37.702030 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:37.702030 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:37.702995 master-0 kubenswrapper[7784]: I0223 13:10:37.702061 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:38.701593 master-0 
kubenswrapper[7784]: I0223 13:10:38.701463 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:38.701593 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:38.701593 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:38.701593 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:38.703033 master-0 kubenswrapper[7784]: I0223 13:10:38.701603 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:39.701532 master-0 kubenswrapper[7784]: I0223 13:10:39.701413 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:39.701532 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:39.701532 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:39.701532 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:39.701532 master-0 kubenswrapper[7784]: I0223 13:10:39.701524 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:40.701609 master-0 kubenswrapper[7784]: I0223 13:10:40.701521 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:40.701609 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:40.701609 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:40.701609 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:40.702555 master-0 kubenswrapper[7784]: I0223 13:10:40.701641 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:41.701576 master-0 kubenswrapper[7784]: I0223 13:10:41.701471 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:41.701576 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:41.701576 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:41.701576 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:41.701576 master-0 kubenswrapper[7784]: I0223 13:10:41.701568 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:42.702040 master-0 kubenswrapper[7784]: I0223 13:10:42.701893 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:42.702040 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:42.702040 master-0 kubenswrapper[7784]: 
[+]process-running ok Feb 23 13:10:42.702040 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:42.703055 master-0 kubenswrapper[7784]: I0223 13:10:42.702073 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:43.701596 master-0 kubenswrapper[7784]: I0223 13:10:43.701453 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:43.701596 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:43.701596 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:43.701596 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:43.701992 master-0 kubenswrapper[7784]: I0223 13:10:43.701605 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:44.701067 master-0 kubenswrapper[7784]: I0223 13:10:44.700930 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:44.701067 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:44.701067 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:44.701067 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:44.701067 master-0 kubenswrapper[7784]: I0223 13:10:44.701041 7784 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:45.701258 master-0 kubenswrapper[7784]: I0223 13:10:45.701131 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:45.701258 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:45.701258 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:45.701258 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:45.701258 master-0 kubenswrapper[7784]: I0223 13:10:45.701250 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:46.702138 master-0 kubenswrapper[7784]: I0223 13:10:46.702031 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:46.702138 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:46.702138 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:46.702138 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:46.703127 master-0 kubenswrapper[7784]: I0223 13:10:46.702157 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
23 13:10:47.190859 master-0 kubenswrapper[7784]: E0223 13:10:47.190792 7784 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:10:47.190859 master-0 kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380" Netns:"/var/run/netns/96f61d88-069d-4344-944f-b3b736645457" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:10:47.190859 master-0 kubenswrapper[7784]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:10:47.190859 master-0 kubenswrapper[7784]: > Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: E0223 13:10:47.190870 7784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380" Netns:"/var/run/netns/96f61d88-069d-4344-944f-b3b736645457" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update 
the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: > pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: E0223 13:10:47.190894 7784 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380" Netns:"/var/run/netns/96f61d88-069d-4344-944f-b3b736645457" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: > pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:10:47.191308 master-0 kubenswrapper[7784]: E0223 13:10:47.190953 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager(e6f93af9-bdbb-4319-8ddb-e5458e8a9275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager(e6f93af9-bdbb-4319-8ddb-e5458e8a9275)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-lqc9w_openshift-operator-lifecycle-manager_e6f93af9-bdbb-4319-8ddb-e5458e8a9275_0(bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-lqc9w to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380\\\" Netns:\\\"/var/run/netns/96f61d88-069d-4344-944f-b3b736645457\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-lqc9w;K8S_POD_INFRA_CONTAINER_ID=bd1dc005aa5550a695a929dbba15a2002591d20487f1d0838955f8a93f1b8380;K8S_POD_UID=e6f93af9-bdbb-4319-8ddb-e5458e8a9275\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w/e6f93af9-bdbb-4319-8ddb-e5458e8a9275]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-lqc9w in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-lqc9w?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" podUID="e6f93af9-bdbb-4319-8ddb-e5458e8a9275" Feb 23 13:10:47.700670 master-0 kubenswrapper[7784]: I0223 13:10:47.700566 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:47.700670 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:47.700670 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:47.700670 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:47.701113 master-0 kubenswrapper[7784]: I0223 13:10:47.700704 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:48.700853 master-0 kubenswrapper[7784]: I0223 13:10:48.700801 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:48.700853 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:48.700853 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 
13:10:48.700853 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:48.701473 master-0 kubenswrapper[7784]: I0223 13:10:48.701442 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:49.701384 master-0 kubenswrapper[7784]: I0223 13:10:49.701256 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:49.701384 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:49.701384 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:49.701384 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:49.702315 master-0 kubenswrapper[7784]: I0223 13:10:49.701385 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:49.928056 master-0 kubenswrapper[7784]: E0223 13:10:49.927903 7784 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{prometheus-operator-admission-webhook-75d56db95f-ld22t.1896e1fcb2bcf544 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:prometheus-operator-admission-webhook-75d56db95f-ld22t,UID:54001c8e-cb57-47dc-8594-9daed4190bda,APIVersion:v1,ResourceVersion:10146,FieldPath:spec.containers{prometheus-operator-admission-webhook},},Reason:Created,Message:Created 
container: prometheus-operator-admission-webhook,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:06:18.86360506 +0000 UTC m=+321.598458703,LastTimestamp:2026-02-23 13:06:18.86360506 +0000 UTC m=+321.598458703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:10:50.096256 master-0 kubenswrapper[7784]: E0223 13:10:50.096182 7784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:10:50.702110 master-0 kubenswrapper[7784]: I0223 13:10:50.702024 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:50.702110 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:50.702110 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:50.702110 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:50.703123 master-0 kubenswrapper[7784]: I0223 13:10:50.702123 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:51.515409 master-0 kubenswrapper[7784]: I0223 13:10:51.515316 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:10:51.515722 master-0 kubenswrapper[7784]: E0223 13:10:51.515639 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:10:51.700455 master-0 kubenswrapper[7784]: I0223 13:10:51.700327 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:51.700455 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:51.700455 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:51.700455 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:51.700932 master-0 kubenswrapper[7784]: I0223 13:10:51.700467 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:52.701208 master-0 kubenswrapper[7784]: I0223 13:10:52.701064 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:52.701208 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:52.701208 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:52.701208 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:52.701208 master-0 kubenswrapper[7784]: I0223 13:10:52.701187 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" 
podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:53.700254 master-0 kubenswrapper[7784]: I0223 13:10:53.700175 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:53.700254 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:53.700254 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:53.700254 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:53.700552 master-0 kubenswrapper[7784]: I0223 13:10:53.700268 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:54.700540 master-0 kubenswrapper[7784]: I0223 13:10:54.700407 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:54.700540 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:54.700540 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:54.700540 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:54.700540 master-0 kubenswrapper[7784]: I0223 13:10:54.700524 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:55.701492 master-0 kubenswrapper[7784]: I0223 13:10:55.701411 7784 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:55.701492 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:55.701492 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:55.701492 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:55.702078 master-0 kubenswrapper[7784]: I0223 13:10:55.701526 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:56.701521 master-0 kubenswrapper[7784]: I0223 13:10:56.701367 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:56.701521 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:56.701521 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:56.701521 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:56.701521 master-0 kubenswrapper[7784]: I0223 13:10:56.701448 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:57.700553 master-0 kubenswrapper[7784]: I0223 13:10:57.700402 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:57.700553 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:57.700553 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:57.700553 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:57.700553 master-0 kubenswrapper[7784]: I0223 13:10:57.700528 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:58.702390 master-0 kubenswrapper[7784]: I0223 13:10:58.702269 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:58.702390 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:58.702390 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:58.702390 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:58.703629 master-0 kubenswrapper[7784]: I0223 13:10:58.702405 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:59.165762 master-0 kubenswrapper[7784]: I0223 13:10:59.165625 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-xljfn_bf57b864-25d7-4420-9052-04dd580a9f7d/cluster-autoscaler-operator/0.log" Feb 23 13:10:59.166142 master-0 kubenswrapper[7784]: I0223 13:10:59.166102 7784 generic.go:334] "Generic (PLEG): container finished" podID="bf57b864-25d7-4420-9052-04dd580a9f7d" 
containerID="d0f028f5c9ba3cbdb9aa71d077d68cd25f9f1bd1f015e402871ed79b04b1c8f3" exitCode=255 Feb 23 13:10:59.166214 master-0 kubenswrapper[7784]: I0223 13:10:59.166180 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" event={"ID":"bf57b864-25d7-4420-9052-04dd580a9f7d","Type":"ContainerDied","Data":"d0f028f5c9ba3cbdb9aa71d077d68cd25f9f1bd1f015e402871ed79b04b1c8f3"} Feb 23 13:10:59.166746 master-0 kubenswrapper[7784]: I0223 13:10:59.166722 7784 scope.go:117] "RemoveContainer" containerID="d0f028f5c9ba3cbdb9aa71d077d68cd25f9f1bd1f015e402871ed79b04b1c8f3" Feb 23 13:10:59.168248 master-0 kubenswrapper[7784]: I0223 13:10:59.168211 7784 generic.go:334] "Generic (PLEG): container finished" podID="f2c50f9a-8c73-4cb9-9cbf-2565496212a6" containerID="ea1eb72990e94dc776f51ab63d27faea76bd89ac6903bb508a9edd4321ae5a8a" exitCode=0 Feb 23 13:10:59.168366 master-0 kubenswrapper[7784]: I0223 13:10:59.168304 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-9pltw" event={"ID":"f2c50f9a-8c73-4cb9-9cbf-2565496212a6","Type":"ContainerDied","Data":"ea1eb72990e94dc776f51ab63d27faea76bd89ac6903bb508a9edd4321ae5a8a"} Feb 23 13:10:59.168948 master-0 kubenswrapper[7784]: I0223 13:10:59.168921 7784 scope.go:117] "RemoveContainer" containerID="ea1eb72990e94dc776f51ab63d27faea76bd89ac6903bb508a9edd4321ae5a8a" Feb 23 13:10:59.169914 master-0 kubenswrapper[7784]: I0223 13:10:59.169873 7784 generic.go:334] "Generic (PLEG): container finished" podID="f47fa225-93fd-458b-b450-a0411e629afd" containerID="5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b" exitCode=0 Feb 23 13:10:59.169954 master-0 kubenswrapper[7784]: I0223 13:10:59.169903 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" 
event={"ID":"f47fa225-93fd-458b-b450-a0411e629afd","Type":"ContainerDied","Data":"5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b"} Feb 23 13:10:59.170172 master-0 kubenswrapper[7784]: I0223 13:10:59.170149 7784 scope.go:117] "RemoveContainer" containerID="5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b" Feb 23 13:10:59.173622 master-0 kubenswrapper[7784]: I0223 13:10:59.173589 7784 generic.go:334] "Generic (PLEG): container finished" podID="affc63b7-db45-429d-82ff-e50f6aae51dc" containerID="41202e9f2790a7f6235a0ce9eb87baca7cb432343b22dcbd777e862cc1562fd9" exitCode=0 Feb 23 13:10:59.173684 master-0 kubenswrapper[7784]: I0223 13:10:59.173653 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" event={"ID":"affc63b7-db45-429d-82ff-e50f6aae51dc","Type":"ContainerDied","Data":"41202e9f2790a7f6235a0ce9eb87baca7cb432343b22dcbd777e862cc1562fd9"} Feb 23 13:10:59.173987 master-0 kubenswrapper[7784]: I0223 13:10:59.173966 7784 scope.go:117] "RemoveContainer" containerID="41202e9f2790a7f6235a0ce9eb87baca7cb432343b22dcbd777e862cc1562fd9" Feb 23 13:10:59.176156 master-0 kubenswrapper[7784]: I0223 13:10:59.176127 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-nm845_47dedc5d-1288-4020-b481-5dca68a7d437/machine-api-operator/0.log" Feb 23 13:10:59.176541 master-0 kubenswrapper[7784]: I0223 13:10:59.176511 7784 generic.go:334] "Generic (PLEG): container finished" podID="47dedc5d-1288-4020-b481-5dca68a7d437" containerID="03c434f6de970d6fadea568234ec0af471fa3dec238b0bd5f6a6179ccb8e7df1" exitCode=255 Feb 23 13:10:59.176588 master-0 kubenswrapper[7784]: I0223 13:10:59.176543 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" 
event={"ID":"47dedc5d-1288-4020-b481-5dca68a7d437","Type":"ContainerDied","Data":"03c434f6de970d6fadea568234ec0af471fa3dec238b0bd5f6a6179ccb8e7df1"} Feb 23 13:10:59.176883 master-0 kubenswrapper[7784]: I0223 13:10:59.176858 7784 scope.go:117] "RemoveContainer" containerID="03c434f6de970d6fadea568234ec0af471fa3dec238b0bd5f6a6179ccb8e7df1" Feb 23 13:10:59.178073 master-0 kubenswrapper[7784]: I0223 13:10:59.178042 7784 generic.go:334] "Generic (PLEG): container finished" podID="92eaa2e2-61cd-4279-a81f-72db51308148" containerID="2e40109d34052395c159362b1fc60377679fbb682b53af5d56f614bb5eac078e" exitCode=0 Feb 23 13:10:59.178117 master-0 kubenswrapper[7784]: I0223 13:10:59.178090 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" event={"ID":"92eaa2e2-61cd-4279-a81f-72db51308148","Type":"ContainerDied","Data":"2e40109d34052395c159362b1fc60377679fbb682b53af5d56f614bb5eac078e"} Feb 23 13:10:59.178349 master-0 kubenswrapper[7784]: I0223 13:10:59.178318 7784 scope.go:117] "RemoveContainer" containerID="2e40109d34052395c159362b1fc60377679fbb682b53af5d56f614bb5eac078e" Feb 23 13:10:59.180579 master-0 kubenswrapper[7784]: I0223 13:10:59.180538 7784 generic.go:334] "Generic (PLEG): container finished" podID="4b9d6485-cf67-49c5-99c1-b8582a0bab70" containerID="9e1ed7ebf6d1fa17181b895f05d45d093802e57011b02b870185acec2590ca56" exitCode=0 Feb 23 13:10:59.180677 master-0 kubenswrapper[7784]: I0223 13:10:59.180594 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" event={"ID":"4b9d6485-cf67-49c5-99c1-b8582a0bab70","Type":"ContainerDied","Data":"9e1ed7ebf6d1fa17181b895f05d45d093802e57011b02b870185acec2590ca56"} Feb 23 13:10:59.180950 master-0 kubenswrapper[7784]: I0223 13:10:59.180927 7784 scope.go:117] "RemoveContainer" containerID="9e1ed7ebf6d1fa17181b895f05d45d093802e57011b02b870185acec2590ca56" Feb 23 
13:10:59.182585 master-0 kubenswrapper[7784]: I0223 13:10:59.182552 7784 generic.go:334] "Generic (PLEG): container finished" podID="24d878bd-05cd-414e-94c1-a3e9ce637331" containerID="31acf0de4b73cbfff55422610e960c624d806171dcec6aaeddd658a636224147" exitCode=0 Feb 23 13:10:59.182639 master-0 kubenswrapper[7784]: I0223 13:10:59.182610 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" event={"ID":"24d878bd-05cd-414e-94c1-a3e9ce637331","Type":"ContainerDied","Data":"31acf0de4b73cbfff55422610e960c624d806171dcec6aaeddd658a636224147"} Feb 23 13:10:59.182866 master-0 kubenswrapper[7784]: I0223 13:10:59.182845 7784 scope.go:117] "RemoveContainer" containerID="31acf0de4b73cbfff55422610e960c624d806171dcec6aaeddd658a636224147" Feb 23 13:10:59.184660 master-0 kubenswrapper[7784]: I0223 13:10:59.184630 7784 generic.go:334] "Generic (PLEG): container finished" podID="d9b02d3c-f671-4850-8c6e-315044a1376c" containerID="a371a0ec45765fbdd026868b4d9728017f8429bf71f526d8798ec8e60adb809a" exitCode=0 Feb 23 13:10:59.184712 master-0 kubenswrapper[7784]: I0223 13:10:59.184654 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" event={"ID":"d9b02d3c-f671-4850-8c6e-315044a1376c","Type":"ContainerDied","Data":"a371a0ec45765fbdd026868b4d9728017f8429bf71f526d8798ec8e60adb809a"} Feb 23 13:10:59.184712 master-0 kubenswrapper[7784]: I0223 13:10:59.184696 7784 scope.go:117] "RemoveContainer" containerID="f9f2d3833534ce883ca50eb44438eaa5f1540dd7900a3929b7c7f66a4a78289a" Feb 23 13:10:59.184910 master-0 kubenswrapper[7784]: I0223 13:10:59.184891 7784 scope.go:117] "RemoveContainer" containerID="a371a0ec45765fbdd026868b4d9728017f8429bf71f526d8798ec8e60adb809a" Feb 23 13:10:59.185060 master-0 kubenswrapper[7784]: E0223 13:10:59.185036 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=etcd-operator pod=etcd-operator-545bf96f4d-dk5t4_openshift-etcd-operator(d9b02d3c-f671-4850-8c6e-315044a1376c)\"" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" podUID="d9b02d3c-f671-4850-8c6e-315044a1376c" Feb 23 13:10:59.186914 master-0 kubenswrapper[7784]: I0223 13:10:59.186890 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="803106da6099883ee98c3575d18f2f07b351da86541aaf47ff092d2a33469b54" exitCode=0 Feb 23 13:10:59.186985 master-0 kubenswrapper[7784]: I0223 13:10:59.186931 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"803106da6099883ee98c3575d18f2f07b351da86541aaf47ff092d2a33469b54"} Feb 23 13:10:59.187585 master-0 kubenswrapper[7784]: I0223 13:10:59.187561 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:10:59.187585 master-0 kubenswrapper[7784]: I0223 13:10:59.187584 7784 scope.go:117] "RemoveContainer" containerID="803106da6099883ee98c3575d18f2f07b351da86541aaf47ff092d2a33469b54" Feb 23 13:10:59.188752 master-0 kubenswrapper[7784]: I0223 13:10:59.188731 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-sj5wd_0d58817c-970f-47b1-a5a5-a491f3e93426/cluster-node-tuning-operator/0.log" Feb 23 13:10:59.188810 master-0 kubenswrapper[7784]: I0223 13:10:59.188766 7784 generic.go:334] "Generic (PLEG): container finished" podID="0d58817c-970f-47b1-a5a5-a491f3e93426" containerID="dc977fa44eb94c7d2786be97eca168973cfb38931e1f243a628741f8ff82c479" exitCode=1 Feb 23 13:10:59.188840 master-0 kubenswrapper[7784]: I0223 13:10:59.188809 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" event={"ID":"0d58817c-970f-47b1-a5a5-a491f3e93426","Type":"ContainerDied","Data":"dc977fa44eb94c7d2786be97eca168973cfb38931e1f243a628741f8ff82c479"} Feb 23 13:10:59.189073 master-0 kubenswrapper[7784]: I0223 13:10:59.189053 7784 scope.go:117] "RemoveContainer" containerID="dc977fa44eb94c7d2786be97eca168973cfb38931e1f243a628741f8ff82c479" Feb 23 13:10:59.190940 master-0 kubenswrapper[7784]: I0223 13:10:59.190902 7784 generic.go:334] "Generic (PLEG): container finished" podID="a663ecaf-ced2-4c7d-91c8-44e94851f7d6" containerID="26a1186ff59907fd2f96cc97b54c6ac88a7c2c4d965d9c749c34381a74f361a9" exitCode=0 Feb 23 13:10:59.190991 master-0 kubenswrapper[7784]: I0223 13:10:59.190965 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" event={"ID":"a663ecaf-ced2-4c7d-91c8-44e94851f7d6","Type":"ContainerDied","Data":"26a1186ff59907fd2f96cc97b54c6ac88a7c2c4d965d9c749c34381a74f361a9"} Feb 23 13:10:59.191436 master-0 kubenswrapper[7784]: I0223 13:10:59.191404 7784 scope.go:117] "RemoveContainer" containerID="26a1186ff59907fd2f96cc97b54c6ac88a7c2c4d965d9c749c34381a74f361a9" Feb 23 13:10:59.193351 master-0 kubenswrapper[7784]: I0223 13:10:59.193300 7784 generic.go:334] "Generic (PLEG): container finished" podID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerID="32088e4f48e7f6e833fcb88730321358fea1c298090821bdfe130f137e38f95c" exitCode=0 Feb 23 13:10:59.193403 master-0 kubenswrapper[7784]: I0223 13:10:59.193363 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerDied","Data":"32088e4f48e7f6e833fcb88730321358fea1c298090821bdfe130f137e38f95c"} Feb 23 13:10:59.193898 master-0 kubenswrapper[7784]: I0223 13:10:59.193865 7784 scope.go:117] "RemoveContainer" 
containerID="32088e4f48e7f6e833fcb88730321358fea1c298090821bdfe130f137e38f95c" Feb 23 13:10:59.701050 master-0 kubenswrapper[7784]: I0223 13:10:59.700904 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:10:59.701050 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:10:59.701050 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:10:59.701050 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:10:59.701266 master-0 kubenswrapper[7784]: I0223 13:10:59.701049 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:10:59.906724 master-0 kubenswrapper[7784]: I0223 13:10:59.906658 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:10:59.907067 master-0 kubenswrapper[7784]: I0223 13:10:59.906739 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:11:00.066364 master-0 kubenswrapper[7784]: E0223 13:11:00.065476 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 23 13:11:00.201247 master-0 kubenswrapper[7784]: I0223 13:11:00.201181 7784 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" event={"ID":"affc63b7-db45-429d-82ff-e50f6aae51dc","Type":"ContainerStarted","Data":"f0260164e49b92f203d564cb2fbb202345c94b9a0c4876b583af6cb3702cae5c"} Feb 23 13:11:00.203911 master-0 kubenswrapper[7784]: I0223 13:11:00.203855 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-xljfn_bf57b864-25d7-4420-9052-04dd580a9f7d/cluster-autoscaler-operator/0.log" Feb 23 13:11:00.204290 master-0 kubenswrapper[7784]: I0223 13:11:00.204246 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" event={"ID":"bf57b864-25d7-4420-9052-04dd580a9f7d","Type":"ContainerStarted","Data":"72b80c28de4d630c326e6ecb2c7fe4abde6ff1d593d9c073d595ed0bbe3fb39c"} Feb 23 13:11:00.206082 master-0 kubenswrapper[7784]: I0223 13:11:00.205965 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" event={"ID":"92eaa2e2-61cd-4279-a81f-72db51308148","Type":"ContainerStarted","Data":"e62cde24645a2536470b50ea833e818a097ebf33c84e4b8eb2e7095e5b5973b8"} Feb 23 13:11:00.208480 master-0 kubenswrapper[7784]: I0223 13:11:00.208428 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" event={"ID":"24d878bd-05cd-414e-94c1-a3e9ce637331","Type":"ContainerStarted","Data":"d19f574d5af7e29a3e26b1f415a4e4b0c4be90660daac81c106a282690f89ecd"} Feb 23 13:11:00.211767 master-0 kubenswrapper[7784]: I0223 13:11:00.211730 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"e83f60b44b83cfd6e3f9aea87eba10757c2f61020bb495edff5a188472446875"} Feb 23 13:11:00.212056 master-0 
kubenswrapper[7784]: I0223 13:11:00.212026 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390" Feb 23 13:11:00.215297 master-0 kubenswrapper[7784]: I0223 13:11:00.214976 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-sj5wd_0d58817c-970f-47b1-a5a5-a491f3e93426/cluster-node-tuning-operator/0.log" Feb 23 13:11:00.215297 master-0 kubenswrapper[7784]: I0223 13:11:00.215042 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" event={"ID":"0d58817c-970f-47b1-a5a5-a491f3e93426","Type":"ContainerStarted","Data":"3729f760982c023099ff4d396619166453ff4b316d56501de699902f0305fc3e"} Feb 23 13:11:00.217693 master-0 kubenswrapper[7784]: I0223 13:11:00.217650 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" event={"ID":"a663ecaf-ced2-4c7d-91c8-44e94851f7d6","Type":"ContainerStarted","Data":"8f483d453540f2698a0e8e83295049e5d47fc8977653a803a70ffe1b0493e568"} Feb 23 13:11:00.219497 master-0 kubenswrapper[7784]: I0223 13:11:00.219457 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerStarted","Data":"b0b430292a4cb196f993cfa8305829b09c246ab695c4e931dbd9b6a8b8bff0b1"} Feb 23 13:11:00.219926 master-0 kubenswrapper[7784]: I0223 13:11:00.219889 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:11:00.221400 master-0 kubenswrapper[7784]: I0223 13:11:00.221363 7784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-nm845_47dedc5d-1288-4020-b481-5dca68a7d437/machine-api-operator/0.log" Feb 23 13:11:00.221729 master-0 kubenswrapper[7784]: I0223 13:11:00.221652 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" event={"ID":"47dedc5d-1288-4020-b481-5dca68a7d437","Type":"ContainerStarted","Data":"167f519b520457d5390e3f3ffcc97750f30cadd49413a4376665e07d410f9843"} Feb 23 13:11:00.224500 master-0 kubenswrapper[7784]: I0223 13:11:00.223744 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" event={"ID":"4b9d6485-cf67-49c5-99c1-b8582a0bab70","Type":"ContainerStarted","Data":"9e9e4fb787f881e77ccd5241ded6efe8cf9366c412a9300ede2897df26c0f9a6"} Feb 23 13:11:00.225720 master-0 kubenswrapper[7784]: I0223 13:11:00.225679 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-9pltw" event={"ID":"f2c50f9a-8c73-4cb9-9cbf-2565496212a6","Type":"ContainerStarted","Data":"a97c60c9dd649d321ae9d5f5d77c2e1d48f01e8d8523477752eda5ca2f1342f0"} Feb 23 13:11:00.228126 master-0 kubenswrapper[7784]: I0223 13:11:00.227578 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" event={"ID":"f47fa225-93fd-458b-b450-a0411e629afd","Type":"ContainerStarted","Data":"4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca"} Feb 23 13:11:00.228126 master-0 kubenswrapper[7784]: I0223 13:11:00.227922 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:11:00.443110 master-0 kubenswrapper[7784]: I0223 13:11:00.443046 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:11:00.702217 master-0 kubenswrapper[7784]: I0223 13:11:00.702112 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:00.702217 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:00.702217 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:00.702217 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:00.702530 master-0 kubenswrapper[7784]: I0223 13:11:00.702235 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:01.240472 master-0 kubenswrapper[7784]: I0223 13:11:01.240395 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"d30622693465b0b62d620607efa00658fed43c117d15217ddcd12f4e9ddc2419"} Feb 23 13:11:01.704677 master-0 kubenswrapper[7784]: I0223 13:11:01.704575 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:01.704677 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:01.704677 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:01.704677 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:01.705031 master-0 kubenswrapper[7784]: I0223 13:11:01.704710 7784 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:02.515063 master-0 kubenswrapper[7784]: I0223 13:11:02.514959 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:11:02.515961 master-0 kubenswrapper[7784]: I0223 13:11:02.515529 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:11:02.631856 master-0 kubenswrapper[7784]: I0223 13:11:02.631769 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:11:02.700700 master-0 kubenswrapper[7784]: I0223 13:11:02.700629 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:02.700700 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:02.700700 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:02.700700 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:02.700700 master-0 kubenswrapper[7784]: I0223 13:11:02.700685 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:03.700245 master-0 kubenswrapper[7784]: I0223 13:11:03.700164 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:03.700245 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:03.700245 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:03.700245 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:03.700836 master-0 kubenswrapper[7784]: I0223 13:11:03.700248 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:04.701130 master-0 kubenswrapper[7784]: I0223 13:11:04.701029 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:04.701130 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:04.701130 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:04.701130 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:04.701130 master-0 kubenswrapper[7784]: I0223 13:11:04.701125 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:05.700977 master-0 kubenswrapper[7784]: I0223 13:11:05.700903 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:05.700977 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 
13:11:05.700977 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:05.700977 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:05.700977 master-0 kubenswrapper[7784]: I0223 13:11:05.700970 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:05.906187 master-0 kubenswrapper[7784]: I0223 13:11:05.906071 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:05.906187 master-0 kubenswrapper[7784]: I0223 13:11:05.906066 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:05.906187 master-0 kubenswrapper[7784]: I0223 13:11:05.906150 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:05.906187 master-0 kubenswrapper[7784]: I0223 13:11:05.906203 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:06.290212 master-0 kubenswrapper[7784]: I0223 13:11:06.290123 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:11:06.296326 master-0 kubenswrapper[7784]: I0223 13:11:06.296277 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:11:06.368130 master-0 kubenswrapper[7784]: I0223 13:11:06.368075 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:11:06.375911 master-0 kubenswrapper[7784]: I0223 13:11:06.375853 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:11:06.701151 master-0 kubenswrapper[7784]: I0223 13:11:06.700983 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:06.701151 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:06.701151 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:06.701151 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:06.701151 master-0 kubenswrapper[7784]: I0223 13:11:06.701108 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:06.701887 master-0 kubenswrapper[7784]: I0223 13:11:06.701198 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:11:06.702304 master-0 kubenswrapper[7784]: I0223 13:11:06.702248 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88"} pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" containerMessage="Container router failed startup probe, will be restarted" Feb 23 13:11:06.702370 master-0 kubenswrapper[7784]: I0223 13:11:06.702319 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" containerID="cri-o://530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88" gracePeriod=3600 Feb 23 13:11:07.283083 master-0 kubenswrapper[7784]: I0223 13:11:07.283003 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:11:08.906631 master-0 kubenswrapper[7784]: I0223 13:11:08.906528 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:08.907258 master-0 kubenswrapper[7784]: I0223 13:11:08.906631 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:08.907258 master-0 kubenswrapper[7784]: I0223 13:11:08.906534 7784 patch_prober.go:28] interesting 
pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:08.907258 master-0 kubenswrapper[7784]: I0223 13:11:08.906766 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:09.515153 master-0 kubenswrapper[7784]: I0223 13:11:09.515066 7784 scope.go:117] "RemoveContainer" containerID="a371a0ec45765fbdd026868b4d9728017f8429bf71f526d8798ec8e60adb809a" Feb 23 13:11:10.321248 master-0 kubenswrapper[7784]: I0223 13:11:10.321172 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" event={"ID":"d9b02d3c-f671-4850-8c6e-315044a1376c","Type":"ContainerStarted","Data":"26392140e4ac45d445d14d5125b2dc3b2cd3ba1e82cf0bdf72a69892b08e69f2"} Feb 23 13:11:10.482601 master-0 kubenswrapper[7784]: I0223 13:11:10.482523 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:11:10.488096 master-0 kubenswrapper[7784]: I0223 13:11:10.488060 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:11:11.907208 master-0 kubenswrapper[7784]: I0223 13:11:11.907119 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: 
connection refused" start-of-body= Feb 23 13:11:11.907829 master-0 kubenswrapper[7784]: I0223 13:11:11.907156 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:11.907829 master-0 kubenswrapper[7784]: I0223 13:11:11.907246 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:11.907829 master-0 kubenswrapper[7784]: I0223 13:11:11.907313 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:11.907829 master-0 kubenswrapper[7784]: I0223 13:11:11.907380 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:11:11.908643 master-0 kubenswrapper[7784]: I0223 13:11:11.908594 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"b0b430292a4cb196f993cfa8305829b09c246ab695c4e931dbd9b6a8b8bff0b1"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 23 13:11:11.908702 master-0 
kubenswrapper[7784]: I0223 13:11:11.908680 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" containerID="cri-o://b0b430292a4cb196f993cfa8305829b09c246ab695c4e931dbd9b6a8b8bff0b1" gracePeriod=30 Feb 23 13:11:11.908816 master-0 kubenswrapper[7784]: I0223 13:11:11.908785 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:11.909768 master-0 kubenswrapper[7784]: I0223 13:11:11.909479 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:12.337737 master-0 kubenswrapper[7784]: I0223 13:11:12.337660 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/1.log" Feb 23 13:11:12.338781 master-0 kubenswrapper[7784]: I0223 13:11:12.338735 7784 generic.go:334] "Generic (PLEG): container finished" podID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerID="b0b430292a4cb196f993cfa8305829b09c246ab695c4e931dbd9b6a8b8bff0b1" exitCode=255 Feb 23 13:11:12.338853 master-0 kubenswrapper[7784]: I0223 13:11:12.338784 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" 
event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerDied","Data":"b0b430292a4cb196f993cfa8305829b09c246ab695c4e931dbd9b6a8b8bff0b1"} Feb 23 13:11:12.338853 master-0 kubenswrapper[7784]: I0223 13:11:12.338826 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerStarted","Data":"0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94"} Feb 23 13:11:12.338937 master-0 kubenswrapper[7784]: I0223 13:11:12.338851 7784 scope.go:117] "RemoveContainer" containerID="32088e4f48e7f6e833fcb88730321358fea1c298090821bdfe130f137e38f95c" Feb 23 13:11:12.339540 master-0 kubenswrapper[7784]: I0223 13:11:12.339489 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:12.339619 master-0 kubenswrapper[7784]: I0223 13:11:12.339559 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:13.349325 master-0 kubenswrapper[7784]: I0223 13:11:13.349237 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/1.log" Feb 23 13:11:14.906280 master-0 kubenswrapper[7784]: I0223 13:11:14.906223 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:11:16.468266 master-0 kubenswrapper[7784]: I0223 13:11:16.468188 7784 patch_prober.go:28] interesting pod/machine-config-daemon-q8bjq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:11:16.469264 master-0 kubenswrapper[7784]: I0223 13:11:16.468286 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" podUID="57803492-e1dd-4994-8330-1e9b393d54fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:11:17.907140 master-0 kubenswrapper[7784]: I0223 13:11:17.907065 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:17.907140 master-0 kubenswrapper[7784]: I0223 13:11:17.907137 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:17.907940 master-0 kubenswrapper[7784]: I0223 13:11:17.907175 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: 
connection refused" start-of-body= Feb 23 13:11:17.907940 master-0 kubenswrapper[7784]: I0223 13:11:17.907255 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:19.805049 master-0 kubenswrapper[7784]: I0223 13:11:19.804766 7784 status_manager.go:851] "Failed to get status for pod" podUID="bdad149d-da6f-49ac-85e5-deb01f161166" pod="openshift-machine-config-operator/machine-config-server-97rhg" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods machine-config-server-97rhg)" Feb 23 13:11:20.906998 master-0 kubenswrapper[7784]: I0223 13:11:20.906914 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:20.907651 master-0 kubenswrapper[7784]: I0223 13:11:20.907014 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:20.907651 master-0 kubenswrapper[7784]: I0223 13:11:20.907111 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 
10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:20.907651 master-0 kubenswrapper[7784]: I0223 13:11:20.907246 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:23.906817 master-0 kubenswrapper[7784]: I0223 13:11:23.906693 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:23.906817 master-0 kubenswrapper[7784]: I0223 13:11:23.906805 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:23.907942 master-0 kubenswrapper[7784]: I0223 13:11:23.906704 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:23.907942 master-0 kubenswrapper[7784]: I0223 13:11:23.906882 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" 
output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 23 13:11:23.907942 master-0 kubenswrapper[7784]: I0223 13:11:23.906917 7784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:11:23.907942 master-0 kubenswrapper[7784]: I0223 13:11:23.907772 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 23 13:11:23.907942 master-0 kubenswrapper[7784]: I0223 13:11:23.907842 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" containerID="cri-o://0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94" gracePeriod=30 Feb 23 13:11:23.908164 master-0 kubenswrapper[7784]: I0223 13:11:23.907909 7784 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-8wrb6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 23 13:11:23.908164 master-0 kubenswrapper[7784]: I0223 13:11:23.908028 7784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: 
connection refused" Feb 23 13:11:24.441888 master-0 kubenswrapper[7784]: I0223 13:11:24.441835 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/2.log" Feb 23 13:11:24.442460 master-0 kubenswrapper[7784]: I0223 13:11:24.442420 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/1.log" Feb 23 13:11:24.442977 master-0 kubenswrapper[7784]: I0223 13:11:24.442927 7784 generic.go:334] "Generic (PLEG): container finished" podID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerID="0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94" exitCode=255 Feb 23 13:11:24.443023 master-0 kubenswrapper[7784]: I0223 13:11:24.442984 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerDied","Data":"0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94"} Feb 23 13:11:24.443063 master-0 kubenswrapper[7784]: I0223 13:11:24.443040 7784 scope.go:117] "RemoveContainer" containerID="b0b430292a4cb196f993cfa8305829b09c246ab695c4e931dbd9b6a8b8bff0b1" Feb 23 13:11:24.446352 master-0 kubenswrapper[7784]: E0223 13:11:24.446295 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-8wrb6_openshift-config-operator(90a694bb-fe3e-4478-bbb4-d2be9cd4c57f)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" Feb 23 13:11:25.454862 master-0 kubenswrapper[7784]: I0223 13:11:25.454731 7784 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/2.log" Feb 23 13:11:25.456329 master-0 kubenswrapper[7784]: I0223 13:11:25.456283 7784 scope.go:117] "RemoveContainer" containerID="0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94" Feb 23 13:11:25.456660 master-0 kubenswrapper[7784]: E0223 13:11:25.456608 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-8wrb6_openshift-config-operator(90a694bb-fe3e-4478-bbb4-d2be9cd4c57f)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" Feb 23 13:11:29.642469 master-0 kubenswrapper[7784]: W0223 13:11:29.642382 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f93af9_bdbb_4319_8ddb_e5458e8a9275.slice/crio-43ecd54108687a6a19ab0e0e7609a070fe6d95b30fac709f11974346b31eb83b WatchSource:0}: Error finding container 43ecd54108687a6a19ab0e0e7609a070fe6d95b30fac709f11974346b31eb83b: Status 404 returned error can't find the container with id 43ecd54108687a6a19ab0e0e7609a070fe6d95b30fac709f11974346b31eb83b Feb 23 13:11:29.689855 master-0 kubenswrapper[7784]: I0223 13:11:29.689772 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" podStartSLOduration=574.755498222 podStartE2EDuration="9m48.689755799s" podCreationTimestamp="2026-02-23 13:01:41 +0000 UTC" firstStartedPulling="2026-02-23 13:06:04.310357277 +0000 UTC m=+307.045210920" lastFinishedPulling="2026-02-23 13:06:18.244614814 +0000 UTC m=+320.979468497" 
observedRunningTime="2026-02-23 13:11:29.619813128 +0000 UTC m=+632.354666801" watchObservedRunningTime="2026-02-23 13:11:29.689755799 +0000 UTC m=+632.424609442" Feb 23 13:11:29.707737 master-0 kubenswrapper[7784]: I0223 13:11:29.704780 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vnmk2" podStartSLOduration=309.152428115 podStartE2EDuration="5m34.704762086s" podCreationTimestamp="2026-02-23 13:05:55 +0000 UTC" firstStartedPulling="2026-02-23 13:05:57.271775777 +0000 UTC m=+300.006629420" lastFinishedPulling="2026-02-23 13:06:22.824109718 +0000 UTC m=+325.558963391" observedRunningTime="2026-02-23 13:11:29.649680069 +0000 UTC m=+632.384533762" watchObservedRunningTime="2026-02-23 13:11:29.704762086 +0000 UTC m=+632.439615729" Feb 23 13:11:29.707737 master-0 kubenswrapper[7784]: I0223 13:11:29.705509 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"] Feb 23 13:11:29.709133 master-0 kubenswrapper[7784]: I0223 13:11:29.708688 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podStartSLOduration=573.215319693 podStartE2EDuration="9m47.708679612s" podCreationTimestamp="2026-02-23 13:01:42 +0000 UTC" firstStartedPulling="2026-02-23 13:06:03.755382585 +0000 UTC m=+306.490236228" lastFinishedPulling="2026-02-23 13:06:18.248742504 +0000 UTC m=+320.983596147" observedRunningTime="2026-02-23 13:11:29.689062693 +0000 UTC m=+632.423916336" watchObservedRunningTime="2026-02-23 13:11:29.708679612 +0000 UTC m=+632.443533245" Feb 23 13:11:29.777824 master-0 kubenswrapper[7784]: I0223 13:11:29.777738 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwhpv" podStartSLOduration=309.26193639 podStartE2EDuration="5m32.777718281s" podCreationTimestamp="2026-02-23 13:05:57 +0000 UTC" 
firstStartedPulling="2026-02-23 13:05:59.306254436 +0000 UTC m=+302.041108079" lastFinishedPulling="2026-02-23 13:06:22.822036287 +0000 UTC m=+325.556889970" observedRunningTime="2026-02-23 13:11:29.772673548 +0000 UTC m=+632.507527201" watchObservedRunningTime="2026-02-23 13:11:29.777718281 +0000 UTC m=+632.512571924" Feb 23 13:11:29.829014 master-0 kubenswrapper[7784]: I0223 13:11:29.825950 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zrtmg" podStartSLOduration=310.093331837 podStartE2EDuration="5m31.825932101s" podCreationTimestamp="2026-02-23 13:05:58 +0000 UTC" firstStartedPulling="2026-02-23 13:06:03.631729852 +0000 UTC m=+306.366583485" lastFinishedPulling="2026-02-23 13:06:25.364330076 +0000 UTC m=+328.099183749" observedRunningTime="2026-02-23 13:11:29.823773648 +0000 UTC m=+632.558627291" watchObservedRunningTime="2026-02-23 13:11:29.825932101 +0000 UTC m=+632.560785734" Feb 23 13:11:30.139116 master-0 kubenswrapper[7784]: I0223 13:11:30.138997 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w7wq9" podStartSLOduration=310.462436992 podStartE2EDuration="5m35.138867766s" podCreationTimestamp="2026-02-23 13:05:55 +0000 UTC" firstStartedPulling="2026-02-23 13:05:58.292194315 +0000 UTC m=+301.027047958" lastFinishedPulling="2026-02-23 13:06:22.968625079 +0000 UTC m=+325.703478732" observedRunningTime="2026-02-23 13:11:30.135463773 +0000 UTC m=+632.870317416" watchObservedRunningTime="2026-02-23 13:11:30.138867766 +0000 UTC m=+632.873721409" Feb 23 13:11:30.250617 master-0 kubenswrapper[7784]: I0223 13:11:30.249235 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 23 13:11:30.250617 master-0 kubenswrapper[7784]: E0223 13:11:30.249507 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" 
containerName="installer" Feb 23 13:11:30.250617 master-0 kubenswrapper[7784]: I0223 13:11:30.249523 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" containerName="installer" Feb 23 13:11:30.250617 master-0 kubenswrapper[7784]: I0223 13:11:30.249629 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" containerName="installer" Feb 23 13:11:30.250617 master-0 kubenswrapper[7784]: I0223 13:11:30.250078 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.256372 master-0 kubenswrapper[7784]: I0223 13:11:30.252910 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 13:11:30.256372 master-0 kubenswrapper[7784]: I0223 13:11:30.253841 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-lz65v" Feb 23 13:11:30.266720 master-0 kubenswrapper[7784]: I0223 13:11:30.264226 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 23 13:11:30.351223 master-0 kubenswrapper[7784]: I0223 13:11:30.351154 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.351223 master-0 kubenswrapper[7784]: I0223 13:11:30.351221 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-var-lock\") pod \"installer-3-master-0\" (UID: 
\"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.351650 master-0 kubenswrapper[7784]: I0223 13:11:30.351256 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.405607 master-0 kubenswrapper[7784]: I0223 13:11:30.405506 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-97rhg" podStartSLOduration=324.405490559 podStartE2EDuration="5m24.405490559s" podCreationTimestamp="2026-02-23 13:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:30.404678828 +0000 UTC m=+633.139532471" watchObservedRunningTime="2026-02-23 13:11:30.405490559 +0000 UTC m=+633.140344202" Feb 23 13:11:30.452371 master-0 kubenswrapper[7784]: I0223 13:11:30.452278 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.452724 master-0 kubenswrapper[7784]: I0223 13:11:30.452427 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.452724 master-0 
kubenswrapper[7784]: I0223 13:11:30.452479 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-var-lock\") pod \"installer-3-master-0\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.452724 master-0 kubenswrapper[7784]: I0223 13:11:30.452581 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-var-lock\") pod \"installer-3-master-0\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.452724 master-0 kubenswrapper[7784]: I0223 13:11:30.452622 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:30.702794 master-0 kubenswrapper[7784]: I0223 13:11:30.702712 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" event={"ID":"e6f93af9-bdbb-4319-8ddb-e5458e8a9275","Type":"ContainerStarted","Data":"64fb76c580bc3f08a8ea7184a31202880220b25acb2c92a61edcc5877755ff05"} Feb 23 13:11:30.702794 master-0 kubenswrapper[7784]: I0223 13:11:30.702779 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" event={"ID":"e6f93af9-bdbb-4319-8ddb-e5458e8a9275","Type":"ContainerStarted","Data":"aa54747a9febe1323fd708d9e39626bc89f56b24d4d5ffcf856434e8d2ca8f57"} Feb 23 13:11:30.702794 master-0 kubenswrapper[7784]: I0223 13:11:30.702797 7784 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" event={"ID":"e6f93af9-bdbb-4319-8ddb-e5458e8a9275","Type":"ContainerStarted","Data":"43ecd54108687a6a19ab0e0e7609a070fe6d95b30fac709f11974346b31eb83b"} Feb 23 13:11:30.703641 master-0 kubenswrapper[7784]: I0223 13:11:30.702902 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:11:31.597375 master-0 kubenswrapper[7784]: I0223 13:11:31.595150 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:31.769576 master-0 kubenswrapper[7784]: I0223 13:11:31.768630 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:11:32.317578 master-0 kubenswrapper[7784]: I0223 13:11:32.317490 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 23 13:11:32.322125 master-0 kubenswrapper[7784]: I0223 13:11:32.322066 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"] Feb 23 13:11:32.323763 master-0 kubenswrapper[7784]: I0223 13:11:32.323727 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.325710 master-0 kubenswrapper[7784]: I0223 13:11:32.325655 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 13:11:32.338529 master-0 kubenswrapper[7784]: I0223 13:11:32.331210 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"] Feb 23 13:11:32.504846 master-0 kubenswrapper[7784]: I0223 13:11:32.504754 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.504846 master-0 kubenswrapper[7784]: I0223 13:11:32.504848 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/762249c6-b548-4733-8b78-64f73430bfbd-tmpfs\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.504846 master-0 kubenswrapper[7784]: I0223 13:11:32.504872 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxjf\" (UniqueName: \"kubernetes.io/projected/762249c6-b548-4733-8b78-64f73430bfbd-kube-api-access-mfxjf\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.505308 master-0 kubenswrapper[7784]: I0223 13:11:32.504921 7784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.609894 master-0 kubenswrapper[7784]: I0223 13:11:32.609719 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/762249c6-b548-4733-8b78-64f73430bfbd-tmpfs\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.616292 master-0 kubenswrapper[7784]: I0223 13:11:32.610473 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/762249c6-b548-4733-8b78-64f73430bfbd-tmpfs\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.620848 master-0 kubenswrapper[7784]: I0223 13:11:32.616754 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxjf\" (UniqueName: \"kubernetes.io/projected/762249c6-b548-4733-8b78-64f73430bfbd-kube-api-access-mfxjf\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.620848 master-0 kubenswrapper[7784]: I0223 13:11:32.616967 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.620848 master-0 kubenswrapper[7784]: I0223 13:11:32.617093 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.621382 master-0 kubenswrapper[7784]: I0223 13:11:32.621311 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.628906 master-0 kubenswrapper[7784]: I0223 13:11:32.628842 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.640745 master-0 kubenswrapper[7784]: I0223 13:11:32.640566 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxjf\" (UniqueName: \"kubernetes.io/projected/762249c6-b548-4733-8b78-64f73430bfbd-kube-api-access-mfxjf\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.676210 master-0 kubenswrapper[7784]: I0223 13:11:32.676109 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:11:32.731375 master-0 kubenswrapper[7784]: I0223 13:11:32.730568 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"b3e4636e-0cb6-492b-89b0-17ca9ff9e252","Type":"ContainerStarted","Data":"bf7c1f8a336dc688c688a82a7743a54d6258545018b3b12e6aea371fdcda658c"} Feb 23 13:11:32.776392 master-0 kubenswrapper[7784]: I0223 13:11:32.775804 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"] Feb 23 13:11:32.779368 master-0 kubenswrapper[7784]: I0223 13:11:32.777969 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.783434 master-0 kubenswrapper[7784]: I0223 13:11:32.781461 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 23 13:11:32.783434 master-0 kubenswrapper[7784]: I0223 13:11:32.781529 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-m9scs" Feb 23 13:11:32.784791 master-0 kubenswrapper[7784]: I0223 13:11:32.783935 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"] Feb 23 13:11:32.784791 master-0 kubenswrapper[7784]: I0223 13:11:32.784439 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 23 13:11:32.784791 master-0 kubenswrapper[7784]: I0223 13:11:32.784472 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 23 13:11:32.822706 master-0 kubenswrapper[7784]: I0223 13:11:32.821050 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.822706 master-0 kubenswrapper[7784]: I0223 13:11:32.821113 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.822706 master-0 kubenswrapper[7784]: I0223 13:11:32.821152 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx8dp\" (UniqueName: \"kubernetes.io/projected/b0a29266-d968-444d-82bb-085ff1d6e506-kube-api-access-zx8dp\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.822706 master-0 kubenswrapper[7784]: I0223 13:11:32.821216 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0a29266-d968-444d-82bb-085ff1d6e506-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.924123 master-0 kubenswrapper[7784]: I0223 13:11:32.923923 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0a29266-d968-444d-82bb-085ff1d6e506-metrics-client-ca\") pod 
\"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.924123 master-0 kubenswrapper[7784]: I0223 13:11:32.924051 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.924123 master-0 kubenswrapper[7784]: I0223 13:11:32.924081 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.924123 master-0 kubenswrapper[7784]: I0223 13:11:32.924129 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx8dp\" (UniqueName: \"kubernetes.io/projected/b0a29266-d968-444d-82bb-085ff1d6e506-kube-api-access-zx8dp\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.925305 master-0 kubenswrapper[7784]: I0223 13:11:32.925270 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0a29266-d968-444d-82bb-085ff1d6e506-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:11:32.925398 master-0 kubenswrapper[7784]: 
E0223 13:11:32.925377 7784 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 23 13:11:32.925444 master-0 kubenswrapper[7784]: E0223 13:11:32.925422 7784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls podName:b0a29266-d968-444d-82bb-085ff1d6e506 nodeName:}" failed. No retries permitted until 2026-02-23 13:11:33.425408632 +0000 UTC m=+636.160262265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-2ksrm" (UID: "b0a29266-d968-444d-82bb-085ff1d6e506") : secret "prometheus-operator-tls" not found
Feb 23 13:11:32.983033 master-0 kubenswrapper[7784]: I0223 13:11:32.982963 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:11:32.983504 master-0 kubenswrapper[7784]: I0223 13:11:32.983467 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx8dp\" (UniqueName: \"kubernetes.io/projected/b0a29266-d968-444d-82bb-085ff1d6e506-kube-api-access-zx8dp\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:11:33.184156 master-0 kubenswrapper[7784]: I0223 13:11:33.182727 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"]
Feb 23 13:11:33.434426 master-0 kubenswrapper[7784]: I0223 13:11:33.434264 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:11:33.438266 master-0 kubenswrapper[7784]: I0223 13:11:33.438214 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:11:33.695466 master-0 kubenswrapper[7784]: I0223 13:11:33.695332 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:11:33.743027 master-0 kubenswrapper[7784]: I0223 13:11:33.742502 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" event={"ID":"762249c6-b548-4733-8b78-64f73430bfbd","Type":"ContainerStarted","Data":"8d637cfb9f9d1dac15d3d489fb0eb5c9220c13e65b587ac2dfd103c260cd0dbd"}
Feb 23 13:11:33.743027 master-0 kubenswrapper[7784]: I0223 13:11:33.742562 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" event={"ID":"762249c6-b548-4733-8b78-64f73430bfbd","Type":"ContainerStarted","Data":"f5aa73f30470446484267ee08c4016bd9826913f9a65531b7d70349b1291252e"}
Feb 23 13:11:33.743972 master-0 kubenswrapper[7784]: I0223 13:11:33.743807 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"
Feb 23 13:11:33.753802 master-0 kubenswrapper[7784]: I0223 13:11:33.753727 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"b3e4636e-0cb6-492b-89b0-17ca9ff9e252","Type":"ContainerStarted","Data":"7ad0e23958703f89572138a68f4fe4a1db5362d40c0141e67bedc3ac0b588812"}
Feb 23 13:11:33.791032 master-0 kubenswrapper[7784]: I0223 13:11:33.789108 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" podStartSLOduration=1.78908417 podStartE2EDuration="1.78908417s" podCreationTimestamp="2026-02-23 13:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:33.768076426 +0000 UTC m=+636.502930069" watchObservedRunningTime="2026-02-23 13:11:33.78908417 +0000 UTC m=+636.523937813"
Feb 23 13:11:33.791032 master-0 kubenswrapper[7784]: I0223 13:11:33.790648 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=3.790642848 podStartE2EDuration="3.790642848s" podCreationTimestamp="2026-02-23 13:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:33.789009129 +0000 UTC m=+636.523862812" watchObservedRunningTime="2026-02-23 13:11:33.790642848 +0000 UTC m=+636.525496491"
Feb 23 13:11:34.065629 master-0 kubenswrapper[7784]: I0223 13:11:34.065569 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"
Feb 23 13:11:34.154214 master-0 kubenswrapper[7784]: I0223 13:11:34.153849 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"]
Feb 23 13:11:34.160517 master-0 kubenswrapper[7784]: W0223 13:11:34.159764 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0a29266_d968_444d_82bb_085ff1d6e506.slice/crio-ce27248bc3e9346c25a58fe23a84bf9588e2d35effcd9c895b600fb3cca69c80 WatchSource:0}: Error finding container ce27248bc3e9346c25a58fe23a84bf9588e2d35effcd9c895b600fb3cca69c80: Status 404 returned error can't find the container with id ce27248bc3e9346c25a58fe23a84bf9588e2d35effcd9c895b600fb3cca69c80
Feb 23 13:11:34.164040 master-0 kubenswrapper[7784]: I0223 13:11:34.164009 7784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:11:34.768275 master-0 kubenswrapper[7784]: I0223 13:11:34.768219 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" event={"ID":"b0a29266-d968-444d-82bb-085ff1d6e506","Type":"ContainerStarted","Data":"ce27248bc3e9346c25a58fe23a84bf9588e2d35effcd9c895b600fb3cca69c80"}
Feb 23 13:11:36.785081 master-0 kubenswrapper[7784]: I0223 13:11:36.784982 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" event={"ID":"b0a29266-d968-444d-82bb-085ff1d6e506","Type":"ContainerStarted","Data":"c4d95259079e2ee884c09aa4095de52bcfeb5e46ca76eea848416412f2040bf3"}
Feb 23 13:11:36.785081 master-0 kubenswrapper[7784]: I0223 13:11:36.785060 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" event={"ID":"b0a29266-d968-444d-82bb-085ff1d6e506","Type":"ContainerStarted","Data":"33724d26d1729e26c5dcd091ba1cc3c86a21ae4151d1c5414c94691591aba724"}
Feb 23 13:11:36.811372 master-0 kubenswrapper[7784]: I0223 13:11:36.811241 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" podStartSLOduration=2.8206872670000003 podStartE2EDuration="4.811223411s" podCreationTimestamp="2026-02-23 13:11:32 +0000 UTC" firstStartedPulling="2026-02-23 13:11:34.16390723 +0000 UTC m=+636.898760873" lastFinishedPulling="2026-02-23 13:11:36.154443374 +0000 UTC m=+638.889297017" observedRunningTime="2026-02-23 13:11:36.809040457 +0000 UTC m=+639.543894100" watchObservedRunningTime="2026-02-23 13:11:36.811223411 +0000 UTC m=+639.546077054"
Feb 23 13:11:39.092424 master-0 kubenswrapper[7784]: I0223 13:11:39.092331 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-tv6s2"]
Feb 23 13:11:39.093855 master-0 kubenswrapper[7784]: I0223 13:11:39.093828 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.096140 master-0 kubenswrapper[7784]: I0223 13:11:39.096076 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 23 13:11:39.096436 master-0 kubenswrapper[7784]: I0223 13:11:39.096402 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 23 13:11:39.096740 master-0 kubenswrapper[7784]: I0223 13:11:39.096703 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6vfhg"
Feb 23 13:11:39.104354 master-0 kubenswrapper[7784]: I0223 13:11:39.104248 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"]
Feb 23 13:11:39.106843 master-0 kubenswrapper[7784]: I0223 13:11:39.106805 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.111590 master-0 kubenswrapper[7784]: I0223 13:11:39.111541 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 23 13:11:39.111839 master-0 kubenswrapper[7784]: I0223 13:11:39.111812 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 23 13:11:39.111973 master-0 kubenswrapper[7784]: I0223 13:11:39.111933 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-wgs7j"
Feb 23 13:11:39.121503 master-0 kubenswrapper[7784]: I0223 13:11:39.121423 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"]
Feb 23 13:11:39.152376 master-0 kubenswrapper[7784]: I0223 13:11:39.149066 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-r66qv"]
Feb 23 13:11:39.152376 master-0 kubenswrapper[7784]: I0223 13:11:39.150314 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.160382 master-0 kubenswrapper[7784]: I0223 13:11:39.154481 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-q7zn4"
Feb 23 13:11:39.160382 master-0 kubenswrapper[7784]: I0223 13:11:39.154822 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 23 13:11:39.160382 master-0 kubenswrapper[7784]: I0223 13:11:39.154946 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 23 13:11:39.160382 master-0 kubenswrapper[7784]: I0223 13:11:39.155042 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 23 13:11:39.168692 master-0 kubenswrapper[7784]: I0223 13:11:39.167262 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-r66qv"]
Feb 23 13:11:39.234532 master-0 kubenswrapper[7784]: I0223 13:11:39.234450 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.234532 master-0 kubenswrapper[7784]: I0223 13:11:39.234529 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-textfile\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234558 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8c76\" (UniqueName: \"kubernetes.io/projected/ae8b0e50-59ee-44a9-9a66-8febb833b771-kube-api-access-n8c76\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234599 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ccbaed9-ab28-47c0-a585-648b9251fd11-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234825 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234850 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-wtmp\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234874 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-sys\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234897 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8b0e50-59ee-44a9-9a66-8febb833b771-metrics-client-ca\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234921 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234957 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-root\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.234996 master-0 kubenswrapper[7784]: I0223 13:11:39.234982 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.235332 master-0 kubenswrapper[7784]: I0223 13:11:39.235009 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.235332 master-0 kubenswrapper[7784]: I0223 13:11:39.235028 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78mm\" (UniqueName: \"kubernetes.io/projected/3ccbaed9-ab28-47c0-a585-648b9251fd11-kube-api-access-q78mm\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.235332 master-0 kubenswrapper[7784]: I0223 13:11:39.235056 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.235332 master-0 kubenswrapper[7784]: I0223 13:11:39.235086 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22p85\" (UniqueName: \"kubernetes.io/projected/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-api-access-22p85\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.235332 master-0 kubenswrapper[7784]: I0223 13:11:39.235118 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.235332 master-0 kubenswrapper[7784]: I0223 13:11:39.235141 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.235332 master-0 kubenswrapper[7784]: I0223 13:11:39.235168 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9ea16701-bd22-4fc0-90ea-f114b52574f8-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.336381 master-0 kubenswrapper[7784]: I0223 13:11:39.336314 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-root\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.336381 master-0 kubenswrapper[7784]: I0223 13:11:39.336381 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336411 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336426 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78mm\" (UniqueName: \"kubernetes.io/projected/3ccbaed9-ab28-47c0-a585-648b9251fd11-kube-api-access-q78mm\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336454 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336487 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22p85\" (UniqueName: \"kubernetes.io/projected/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-api-access-22p85\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336508 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336537 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336562 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9ea16701-bd22-4fc0-90ea-f114b52574f8-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336596 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336612 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-textfile\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336627 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8c76\" (UniqueName: \"kubernetes.io/projected/ae8b0e50-59ee-44a9-9a66-8febb833b771-kube-api-access-n8c76\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.336648 master-0 kubenswrapper[7784]: I0223 13:11:39.336652 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ccbaed9-ab28-47c0-a585-648b9251fd11-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.336990 master-0 kubenswrapper[7784]: I0223 13:11:39.336670 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.336990 master-0 kubenswrapper[7784]: I0223 13:11:39.336688 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-wtmp\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.336990 master-0 kubenswrapper[7784]: I0223 13:11:39.336702 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-sys\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.336990 master-0 kubenswrapper[7784]: I0223 13:11:39.336716 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8b0e50-59ee-44a9-9a66-8febb833b771-metrics-client-ca\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.336990 master-0 kubenswrapper[7784]: I0223 13:11:39.336734 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.337930 master-0 kubenswrapper[7784]: I0223 13:11:39.337533 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9ea16701-bd22-4fc0-90ea-f114b52574f8-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.337930 master-0 kubenswrapper[7784]: I0223 13:11:39.337743 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-textfile\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.339131 master-0 kubenswrapper[7784]: I0223 13:11:39.338172 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-wtmp\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.339131 master-0 kubenswrapper[7784]: I0223 13:11:39.338860 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-sys\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.339131 master-0 kubenswrapper[7784]: I0223 13:11:39.338925 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ccbaed9-ab28-47c0-a585-648b9251fd11-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.339131 master-0 kubenswrapper[7784]: I0223 13:11:39.338962 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.339131 master-0 kubenswrapper[7784]: I0223 13:11:39.339103 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-root\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.339721 master-0 kubenswrapper[7784]: I0223 13:11:39.339669 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.339960 master-0 kubenswrapper[7784]: I0223 13:11:39.339914 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8b0e50-59ee-44a9-9a66-8febb833b771-metrics-client-ca\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.341950 master-0 kubenswrapper[7784]: I0223 13:11:39.341910 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.342217 master-0 kubenswrapper[7784]: I0223 13:11:39.342190 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.344613 master-0 kubenswrapper[7784]: I0223 13:11:39.344528 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.345534 master-0 kubenswrapper[7784]: I0223 13:11:39.345476 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.347081 master-0 kubenswrapper[7784]: I0223 13:11:39.347027 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.347141 master-0 kubenswrapper[7784]: I0223 13:11:39.347048 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.358464 master-0 kubenswrapper[7784]: I0223 13:11:39.358423 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8c76\" (UniqueName: \"kubernetes.io/projected/ae8b0e50-59ee-44a9-9a66-8febb833b771-kube-api-access-n8c76\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.360818 master-0 kubenswrapper[7784]: I0223 13:11:39.360780 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22p85\" (UniqueName: \"kubernetes.io/projected/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-api-access-22p85\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.361088 master-0 kubenswrapper[7784]: I0223 13:11:39.361027 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78mm\" (UniqueName: \"kubernetes.io/projected/3ccbaed9-ab28-47c0-a585-648b9251fd11-kube-api-access-q78mm\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.423401 master-0 kubenswrapper[7784]: I0223 13:11:39.423314 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:11:39.445649 master-0 kubenswrapper[7784]: I0223 13:11:39.440367 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:11:39.474633 master-0 kubenswrapper[7784]: I0223 13:11:39.474554 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:11:39.491168 master-0 kubenswrapper[7784]: W0223 13:11:39.491088 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8b0e50_59ee_44a9_9a66_8febb833b771.slice/crio-34cdfb95fd9aaeec9095ab977f01f62ceb0c9128bfe1c704df13557634391673 WatchSource:0}: Error finding container 34cdfb95fd9aaeec9095ab977f01f62ceb0c9128bfe1c704df13557634391673: Status 404 returned error can't find the container with id 34cdfb95fd9aaeec9095ab977f01f62ceb0c9128bfe1c704df13557634391673
Feb 23 13:11:39.515233 master-0 kubenswrapper[7784]: I0223 13:11:39.515179 7784 scope.go:117] "RemoveContainer" containerID="0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94"
Feb 23 13:11:39.515484 master-0 kubenswrapper[7784]: E0223 13:11:39.515452 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-8wrb6_openshift-config-operator(90a694bb-fe3e-4478-bbb4-d2be9cd4c57f)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" podUID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f"
Feb 23 13:11:39.806040 master-0 kubenswrapper[7784]: I0223 13:11:39.805935 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tv6s2" event={"ID":"ae8b0e50-59ee-44a9-9a66-8febb833b771","Type":"ContainerStarted","Data":"34cdfb95fd9aaeec9095ab977f01f62ceb0c9128bfe1c704df13557634391673"}
Feb 23 13:11:39.940304 master-0 kubenswrapper[7784]: I0223 13:11:39.940246 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"]
Feb 23 13:11:39.945689 master-0 kubenswrapper[7784]: W0223 13:11:39.945585 7784 manager.go:1169] Failed to process
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ccbaed9_ab28_47c0_a585_648b9251fd11.slice/crio-404ed298090e42206d6c1bf4817333cb8ad6772bcba069fb38caf746806a1e14 WatchSource:0}: Error finding container 404ed298090e42206d6c1bf4817333cb8ad6772bcba069fb38caf746806a1e14: Status 404 returned error can't find the container with id 404ed298090e42206d6c1bf4817333cb8ad6772bcba069fb38caf746806a1e14 Feb 23 13:11:40.043090 master-0 kubenswrapper[7784]: I0223 13:11:40.042988 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-r66qv"] Feb 23 13:11:40.049867 master-0 kubenswrapper[7784]: W0223 13:11:40.049759 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ea16701_bd22_4fc0_90ea_f114b52574f8.slice/crio-752f9e4ea839752f549240b10d1d3e2131f24a8cc548f81bd7a8c88ec615bb72 WatchSource:0}: Error finding container 752f9e4ea839752f549240b10d1d3e2131f24a8cc548f81bd7a8c88ec615bb72: Status 404 returned error can't find the container with id 752f9e4ea839752f549240b10d1d3e2131f24a8cc548f81bd7a8c88ec615bb72 Feb 23 13:11:40.820274 master-0 kubenswrapper[7784]: I0223 13:11:40.820207 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" event={"ID":"9ea16701-bd22-4fc0-90ea-f114b52574f8","Type":"ContainerStarted","Data":"752f9e4ea839752f549240b10d1d3e2131f24a8cc548f81bd7a8c88ec615bb72"} Feb 23 13:11:40.823144 master-0 kubenswrapper[7784]: I0223 13:11:40.823008 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" event={"ID":"3ccbaed9-ab28-47c0-a585-648b9251fd11","Type":"ContainerStarted","Data":"90413b3f6fbd13425e0773aedac776571ca3dac2ae6bcb1bb6f972fc375e14b4"} Feb 23 13:11:40.823252 master-0 kubenswrapper[7784]: I0223 13:11:40.823190 7784 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" event={"ID":"3ccbaed9-ab28-47c0-a585-648b9251fd11","Type":"ContainerStarted","Data":"d99d204109e0cd1a98ded70eac3636a3ac0c0ecdac0cae9b137992afcbd81520"} Feb 23 13:11:40.823330 master-0 kubenswrapper[7784]: I0223 13:11:40.823218 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" event={"ID":"3ccbaed9-ab28-47c0-a585-648b9251fd11","Type":"ContainerStarted","Data":"404ed298090e42206d6c1bf4817333cb8ad6772bcba069fb38caf746806a1e14"} Feb 23 13:11:41.834647 master-0 kubenswrapper[7784]: I0223 13:11:41.834554 7784 generic.go:334] "Generic (PLEG): container finished" podID="ae8b0e50-59ee-44a9-9a66-8febb833b771" containerID="42b787e83faf9100258d1cfdca0f7aae6b32dce8da26a5afef3b43b0d29e85d2" exitCode=0 Feb 23 13:11:41.836074 master-0 kubenswrapper[7784]: I0223 13:11:41.834646 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tv6s2" event={"ID":"ae8b0e50-59ee-44a9-9a66-8febb833b771","Type":"ContainerDied","Data":"42b787e83faf9100258d1cfdca0f7aae6b32dce8da26a5afef3b43b0d29e85d2"} Feb 23 13:11:42.849742 master-0 kubenswrapper[7784]: I0223 13:11:42.849648 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" event={"ID":"9ea16701-bd22-4fc0-90ea-f114b52574f8","Type":"ContainerStarted","Data":"2249c53611d300115e57bfca580f915f909b43094542c96b7df1b28217101e13"} Feb 23 13:11:42.849742 master-0 kubenswrapper[7784]: I0223 13:11:42.849740 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" event={"ID":"9ea16701-bd22-4fc0-90ea-f114b52574f8","Type":"ContainerStarted","Data":"e4bce2724a6a2455a1be5e96be2113930417aa286033d8b85e9cf121eb569ad2"} Feb 23 13:11:42.849742 master-0 kubenswrapper[7784]: I0223 13:11:42.849762 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" event={"ID":"9ea16701-bd22-4fc0-90ea-f114b52574f8","Type":"ContainerStarted","Data":"82e0785158bca10ea39c8b8ab8b6022ad799ae09d222dae9ebb8d717b52d2863"} Feb 23 13:11:42.853625 master-0 kubenswrapper[7784]: I0223 13:11:42.853565 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" event={"ID":"3ccbaed9-ab28-47c0-a585-648b9251fd11","Type":"ContainerStarted","Data":"730d200c30a2a39e0207fde4b6a706362a25c1599a9f0710871044e2bc2067bd"} Feb 23 13:11:42.858089 master-0 kubenswrapper[7784]: I0223 13:11:42.858027 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tv6s2" event={"ID":"ae8b0e50-59ee-44a9-9a66-8febb833b771","Type":"ContainerStarted","Data":"a25b24c3b58f249d29dfa1ce7cd14b4a5d6fe7b2155787d87927428100c1a899"} Feb 23 13:11:42.858225 master-0 kubenswrapper[7784]: I0223 13:11:42.858081 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tv6s2" event={"ID":"ae8b0e50-59ee-44a9-9a66-8febb833b771","Type":"ContainerStarted","Data":"1502f443c61c43e888409a6e5bb18cc1c48195562e195228abc1729ebf61dc93"} Feb 23 13:11:42.924996 master-0 kubenswrapper[7784]: I0223 13:11:42.921611 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" podStartSLOduration=2.168856701 podStartE2EDuration="3.921570067s" podCreationTimestamp="2026-02-23 13:11:39 +0000 UTC" firstStartedPulling="2026-02-23 13:11:40.06185109 +0000 UTC m=+642.796704733" lastFinishedPulling="2026-02-23 13:11:41.814564446 +0000 UTC m=+644.549418099" observedRunningTime="2026-02-23 13:11:42.882723757 +0000 UTC m=+645.617577430" watchObservedRunningTime="2026-02-23 13:11:42.921570067 +0000 UTC m=+645.656423750" Feb 23 13:11:42.929548 master-0 kubenswrapper[7784]: I0223 13:11:42.929400 7784 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-monitoring/node-exporter-tv6s2" podStartSLOduration=2.761474336 podStartE2EDuration="3.929333686s" podCreationTimestamp="2026-02-23 13:11:39 +0000 UTC" firstStartedPulling="2026-02-23 13:11:39.495827023 +0000 UTC m=+642.230680676" lastFinishedPulling="2026-02-23 13:11:40.663686373 +0000 UTC m=+643.398540026" observedRunningTime="2026-02-23 13:11:42.919615279 +0000 UTC m=+645.654468982" watchObservedRunningTime="2026-02-23 13:11:42.929333686 +0000 UTC m=+645.664187379" Feb 23 13:11:42.955481 master-0 kubenswrapper[7784]: I0223 13:11:42.955223 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" podStartSLOduration=2.450012728 podStartE2EDuration="3.955177239s" podCreationTimestamp="2026-02-23 13:11:39 +0000 UTC" firstStartedPulling="2026-02-23 13:11:40.3112352 +0000 UTC m=+643.046088843" lastFinishedPulling="2026-02-23 13:11:41.816399701 +0000 UTC m=+644.551253354" observedRunningTime="2026-02-23 13:11:42.944725943 +0000 UTC m=+645.679579596" watchObservedRunningTime="2026-02-23 13:11:42.955177239 +0000 UTC m=+645.690030922" Feb 23 13:11:44.452176 master-0 kubenswrapper[7784]: I0223 13:11:44.452092 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-69f7f878d4-746vx"] Feb 23 13:11:44.453193 master-0 kubenswrapper[7784]: I0223 13:11:44.453158 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.456546 master-0 kubenswrapper[7784]: I0223 13:11:44.456251 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 23 13:11:44.457007 master-0 kubenswrapper[7784]: I0223 13:11:44.456819 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-zjvhw" Feb 23 13:11:44.457651 master-0 kubenswrapper[7784]: I0223 13:11:44.457549 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 23 13:11:44.457651 master-0 kubenswrapper[7784]: I0223 13:11:44.457596 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 23 13:11:44.457961 master-0 kubenswrapper[7784]: I0223 13:11:44.457624 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-639sbo1a4as7e" Feb 23 13:11:44.459572 master-0 kubenswrapper[7784]: I0223 13:11:44.459492 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 23 13:11:44.478803 master-0 kubenswrapper[7784]: I0223 13:11:44.478729 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-69f7f878d4-746vx"] Feb 23 13:11:44.551980 master-0 kubenswrapper[7784]: I0223 13:11:44.551916 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.552380 master-0 kubenswrapper[7784]: I0223 13:11:44.552328 7784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.552484 master-0 kubenswrapper[7784]: I0223 13:11:44.552471 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.552608 master-0 kubenswrapper[7784]: I0223 13:11:44.552593 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.552718 master-0 kubenswrapper[7784]: I0223 13:11:44.552705 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8wvx\" (UniqueName: \"kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.552872 master-0 kubenswrapper[7784]: I0223 13:11:44.552858 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.552981 master-0 kubenswrapper[7784]: I0223 13:11:44.552965 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.655316 master-0 kubenswrapper[7784]: I0223 13:11:44.655250 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.655621 master-0 kubenswrapper[7784]: I0223 13:11:44.655604 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.655715 master-0 kubenswrapper[7784]: I0223 13:11:44.655701 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " 
pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.655917 master-0 kubenswrapper[7784]: I0223 13:11:44.655899 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.656021 master-0 kubenswrapper[7784]: I0223 13:11:44.656004 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8wvx\" (UniqueName: \"kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.656135 master-0 kubenswrapper[7784]: I0223 13:11:44.656119 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.656226 master-0 kubenswrapper[7784]: I0223 13:11:44.656210 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.657490 master-0 kubenswrapper[7784]: I0223 13:11:44.657401 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.657975 master-0 kubenswrapper[7784]: I0223 13:11:44.657956 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.658402 master-0 kubenswrapper[7784]: I0223 13:11:44.658310 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.659495 master-0 kubenswrapper[7784]: I0223 13:11:44.659432 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.662463 master-0 kubenswrapper[7784]: I0223 13:11:44.662397 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 
13:11:44.662847 master-0 kubenswrapper[7784]: I0223 13:11:44.662782 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.692490 master-0 kubenswrapper[7784]: I0223 13:11:44.692416 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8wvx\" (UniqueName: \"kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:44.784203 master-0 kubenswrapper[7784]: I0223 13:11:44.784127 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:11:45.261234 master-0 kubenswrapper[7784]: I0223 13:11:45.261169 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-69f7f878d4-746vx"] Feb 23 13:11:45.270035 master-0 kubenswrapper[7784]: W0223 13:11:45.269970 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65ecd69f_3f1b_41d7_ba1f_225acaa735d7.slice/crio-aacef958695b1652452330209b40a5322d7de81c0ce86e84b51d42da90b8a1df WatchSource:0}: Error finding container aacef958695b1652452330209b40a5322d7de81c0ce86e84b51d42da90b8a1df: Status 404 returned error can't find the container with id aacef958695b1652452330209b40a5322d7de81c0ce86e84b51d42da90b8a1df Feb 23 13:11:45.885543 master-0 kubenswrapper[7784]: I0223 13:11:45.885429 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" 
event={"ID":"65ecd69f-3f1b-41d7-ba1f-225acaa735d7","Type":"ContainerStarted","Data":"aacef958695b1652452330209b40a5322d7de81c0ce86e84b51d42da90b8a1df"} Feb 23 13:11:46.467476 master-0 kubenswrapper[7784]: I0223 13:11:46.467371 7784 patch_prober.go:28] interesting pod/machine-config-daemon-q8bjq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:11:46.467825 master-0 kubenswrapper[7784]: I0223 13:11:46.467475 7784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" podUID="57803492-e1dd-4994-8330-1e9b393d54fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:11:46.893377 master-0 kubenswrapper[7784]: I0223 13:11:46.893308 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" event={"ID":"65ecd69f-3f1b-41d7-ba1f-225acaa735d7","Type":"ContainerStarted","Data":"3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438"} Feb 23 13:11:46.916925 master-0 kubenswrapper[7784]: I0223 13:11:46.916777 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" podStartSLOduration=1.435216118 podStartE2EDuration="2.91673413s" podCreationTimestamp="2026-02-23 13:11:44 +0000 UTC" firstStartedPulling="2026-02-23 13:11:45.272286082 +0000 UTC m=+648.007139725" lastFinishedPulling="2026-02-23 13:11:46.753804094 +0000 UTC m=+649.488657737" observedRunningTime="2026-02-23 13:11:46.912440315 +0000 UTC m=+649.647293958" watchObservedRunningTime="2026-02-23 13:11:46.91673413 +0000 UTC m=+649.651587773" Feb 23 13:11:48.515698 master-0 kubenswrapper[7784]: I0223 13:11:48.515621 7784 
kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:11:48.515698 master-0 kubenswrapper[7784]: I0223 13:11:48.515670 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:11:48.533430 master-0 kubenswrapper[7784]: I0223 13:11:48.533309 7784 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Feb 23 13:11:48.534502 master-0 kubenswrapper[7784]: I0223 13:11:48.534448 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:11:48.545966 master-0 kubenswrapper[7784]: I0223 13:11:48.545893 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:11:48.563248 master-0 kubenswrapper[7784]: I0223 13:11:48.563194 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:11:48.912726 master-0 kubenswrapper[7784]: I0223 13:11:48.912565 7784 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:11:48.912726 master-0 kubenswrapper[7784]: I0223 13:11:48.912624 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a06256c5-5d92-4413-8b6d-5489a6baccd5" Feb 23 13:11:52.942446 master-0 kubenswrapper[7784]: I0223 13:11:52.942305 7784 generic.go:334] "Generic (PLEG): container finished" podID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerID="530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88" exitCode=0 Feb 23 13:11:52.942446 master-0 kubenswrapper[7784]: I0223 13:11:52.942385 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" 
event={"ID":"73ba4f16-0217-4bf1-8fc2-6b385eda0771","Type":"ContainerDied","Data":"530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88"} Feb 23 13:11:52.942446 master-0 kubenswrapper[7784]: I0223 13:11:52.942444 7784 scope.go:117] "RemoveContainer" containerID="269b02e4bdd6edd8e8fdf7d10edb62714b47f3af26d18d46c35faad3badc04c5" Feb 23 13:11:53.515254 master-0 kubenswrapper[7784]: I0223 13:11:53.515197 7784 scope.go:117] "RemoveContainer" containerID="0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94" Feb 23 13:11:53.951548 master-0 kubenswrapper[7784]: I0223 13:11:53.951426 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" event={"ID":"73ba4f16-0217-4bf1-8fc2-6b385eda0771","Type":"ContainerStarted","Data":"cf7e22147b726d7bb900d92e5a79955383f2346325db290ec3e45f21c5be3266"} Feb 23 13:11:53.955140 master-0 kubenswrapper[7784]: I0223 13:11:53.955111 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/2.log" Feb 23 13:11:53.958548 master-0 kubenswrapper[7784]: I0223 13:11:53.958504 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" event={"ID":"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f","Type":"ContainerStarted","Data":"70d18626a8b59c24b7cef3bf9fc59d9b8caecc11500290983bc7a8acb29dfa19"} Feb 23 13:11:53.959160 master-0 kubenswrapper[7784]: I0223 13:11:53.959105 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:11:54.016702 master-0 kubenswrapper[7784]: I0223 13:11:54.016622 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=6.016598034 podStartE2EDuration="6.016598034s" 
podCreationTimestamp="2026-02-23 13:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:54.012496363 +0000 UTC m=+656.747350016" watchObservedRunningTime="2026-02-23 13:11:54.016598034 +0000 UTC m=+656.751451717" Feb 23 13:11:54.698079 master-0 kubenswrapper[7784]: I0223 13:11:54.698008 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:11:54.700661 master-0 kubenswrapper[7784]: I0223 13:11:54.700629 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:54.700661 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:54.700661 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:54.700661 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:54.700824 master-0 kubenswrapper[7784]: I0223 13:11:54.700686 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:55.700796 master-0 kubenswrapper[7784]: I0223 13:11:55.700687 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:55.700796 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:55.700796 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:55.700796 master-0 kubenswrapper[7784]: healthz check failed Feb 23 
13:11:55.700796 master-0 kubenswrapper[7784]: I0223 13:11:55.700755 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:56.701063 master-0 kubenswrapper[7784]: I0223 13:11:56.700959 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:56.701063 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:56.701063 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:56.701063 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:56.701791 master-0 kubenswrapper[7784]: I0223 13:11:56.701098 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:56.913201 master-0 kubenswrapper[7784]: I0223 13:11:56.913113 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:11:57.701566 master-0 kubenswrapper[7784]: I0223 13:11:57.701469 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:57.701566 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:57.701566 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:57.701566 master-0 kubenswrapper[7784]: healthz check failed 
Feb 23 13:11:57.703054 master-0 kubenswrapper[7784]: I0223 13:11:57.701585 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:58.700718 master-0 kubenswrapper[7784]: I0223 13:11:58.700569 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:58.700718 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:58.700718 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:58.700718 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:58.701419 master-0 kubenswrapper[7784]: I0223 13:11:58.700729 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:59.701893 master-0 kubenswrapper[7784]: I0223 13:11:59.701815 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:59.701893 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:11:59.701893 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:11:59.701893 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:11:59.703323 master-0 kubenswrapper[7784]: I0223 13:11:59.703274 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" 
podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:00.701872 master-0 kubenswrapper[7784]: I0223 13:12:00.701657 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:00.701872 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:00.701872 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:00.701872 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:00.701872 master-0 kubenswrapper[7784]: I0223 13:12:00.701787 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:01.701196 master-0 kubenswrapper[7784]: I0223 13:12:01.701088 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:01.701196 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:01.701196 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:01.701196 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:01.701879 master-0 kubenswrapper[7784]: I0223 13:12:01.701201 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:02.701254 master-0 kubenswrapper[7784]: I0223 13:12:02.701097 7784
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:02.701254 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:02.701254 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:02.701254 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:02.701254 master-0 kubenswrapper[7784]: I0223 13:12:02.701227 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:03.698565 master-0 kubenswrapper[7784]: I0223 13:12:03.698502 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:12:03.701779 master-0 kubenswrapper[7784]: I0223 13:12:03.701748 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:03.701779 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:03.701779 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:03.701779 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:03.702300 master-0 kubenswrapper[7784]: I0223 13:12:03.701797 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:04.700999 master-0 kubenswrapper[7784]: I0223 13:12:04.700946 7784 patch_prober.go:28]
interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:04.700999 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:04.700999 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:04.700999 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:04.701450 master-0 kubenswrapper[7784]: I0223 13:12:04.701423 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:04.784847 master-0 kubenswrapper[7784]: I0223 13:12:04.784728 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:12:04.784847 master-0 kubenswrapper[7784]: I0223 13:12:04.784835 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:12:05.701384 master-0 kubenswrapper[7784]: I0223 13:12:05.701236 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:05.701384 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:05.701384 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:05.701384 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:05.701906 master-0 kubenswrapper[7784]: I0223 13:12:05.701393 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771"
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:06.399708 master-0 kubenswrapper[7784]: I0223 13:12:06.399626 7784 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 23 13:12:06.400571 master-0 kubenswrapper[7784]: I0223 13:12:06.399966 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" containerID="cri-o://e83f60b44b83cfd6e3f9aea87eba10757c2f61020bb495edff5a188472446875" gracePeriod=30
Feb 23 13:12:06.400571 master-0 kubenswrapper[7784]: I0223 13:12:06.400170 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://d30622693465b0b62d620607efa00658fed43c117d15217ddcd12f4e9ddc2419" gracePeriod=30
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223 13:12:06.401218 7784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: E0223 13:12:06.401586 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223 13:12:06.401600 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: E0223 13:12:06.401615 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223
13:12:06.401621 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: E0223 13:12:06.401637 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223 13:12:06.401643 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: E0223 13:12:06.401656 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223 13:12:06.401662 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: E0223 13:12:06.401672 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223 13:12:06.401678 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: E0223 13:12:06.401688 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223 13:12:06.401695 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: E0223 13:12:06.401703 7784 cpu_manager.go:410] "RemoveStaleState:
removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223 13:12:06.401719 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: E0223 13:12:06.401728 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.401800 master-0 kubenswrapper[7784]: I0223 13:12:06.401734 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.401917 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.401932 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.401943 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402013 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402027 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402035 7784 memory_manager.go:354] "RemoveStaleState removing state"
podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402043 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: E0223 13:12:06.402176 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402184 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: E0223 13:12:06.402193 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402200 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402324 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402354 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:12:06.403105 master-0 kubenswrapper[7784]: I0223 13:12:06.402369 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:12:06.404291 master-0 kubenswrapper[7784]: I0223 13:12:06.403297 7784 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:12:06.424063 master-0 kubenswrapper[7784]: I0223 13:12:06.423989 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0b2cc5255d43139b8419449b8700857e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:12:06.424369 master-0 kubenswrapper[7784]: I0223 13:12:06.424083 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0b2cc5255d43139b8419449b8700857e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:12:06.457910 master-0 kubenswrapper[7784]: I0223 13:12:06.457808 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:12:06.528588 master-0 kubenswrapper[7784]: I0223 13:12:06.526438 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0b2cc5255d43139b8419449b8700857e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:12:06.528588 master-0 kubenswrapper[7784]: I0223 13:12:06.526561 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0b2cc5255d43139b8419449b8700857e\") "
pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:12:06.528588 master-0 kubenswrapper[7784]: I0223 13:12:06.526644 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0b2cc5255d43139b8419449b8700857e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:12:06.528588 master-0 kubenswrapper[7784]: I0223 13:12:06.526940 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0b2cc5255d43139b8419449b8700857e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:12:06.585223 master-0 kubenswrapper[7784]: I0223 13:12:06.585168 7784 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:12:06.628036 master-0 kubenswrapper[7784]: I0223 13:12:06.627966 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:12:06.628036 master-0 kubenswrapper[7784]: I0223 13:12:06.628023 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:12:06.628036 master-0 kubenswrapper[7784]: I0223 13:12:06.628042 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:12:06.628278 master-0 kubenswrapper[7784]: I0223 13:12:06.628074 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:12:06.628278 master-0 kubenswrapper[7784]: I0223 13:12:06.628106 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:12:06.628278 master-0 kubenswrapper[7784]: I0223 13:12:06.628165 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:12:06.628417 master-0 kubenswrapper[7784]: I0223 13:12:06.628269 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:12:06.628417 master-0 kubenswrapper[7784]: I0223 13:12:06.628191 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets" (OuterVolumeSpecName: "secrets") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:12:06.628483 master-0 kubenswrapper[7784]: I0223 13:12:06.628191 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config" (OuterVolumeSpecName: "config") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:12:06.628483 master-0 kubenswrapper[7784]: I0223 13:12:06.628318 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs" (OuterVolumeSpecName: "logs") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:12:06.628907 master-0 kubenswrapper[7784]: I0223 13:12:06.628868 7784 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:06.628907 master-0 kubenswrapper[7784]: I0223 13:12:06.628897 7784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:06.628990 master-0 kubenswrapper[7784]: I0223 13:12:06.628910 7784 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:06.628990 master-0 kubenswrapper[7784]: I0223 13:12:06.628929 7784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:06.628990 master-0 kubenswrapper[7784]: I0223 13:12:06.628940 7784 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:06.638402 master-0 kubenswrapper[7784]: I0223 13:12:06.638311 7784 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="28e669b6-6091-40aa-8eab-26d2a60be87e"
Feb 23 13:12:06.700438 master-0 kubenswrapper[7784]: I0223 13:12:06.700266 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed:
reason withheld
Feb 23 13:12:06.700438 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:06.700438 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:06.700438 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:06.700715 master-0 kubenswrapper[7784]: I0223 13:12:06.700412 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:06.750798 master-0 kubenswrapper[7784]: I0223 13:12:06.750731 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:12:06.789048 master-0 kubenswrapper[7784]: W0223 13:12:06.788947 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b2cc5255d43139b8419449b8700857e.slice/crio-534fcfa22aa78494656c9b2cf37c44e47934f98b0bf5a44044dc03b52be24f83 WatchSource:0}: Error finding container 534fcfa22aa78494656c9b2cf37c44e47934f98b0bf5a44044dc03b52be24f83: Status 404 returned error can't find the container with id 534fcfa22aa78494656c9b2cf37c44e47934f98b0bf5a44044dc03b52be24f83
Feb 23 13:12:07.076079 master-0 kubenswrapper[7784]: I0223 13:12:07.076024 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"0b2cc5255d43139b8419449b8700857e","Type":"ContainerStarted","Data":"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"}
Feb 23 13:12:07.076079 master-0 kubenswrapper[7784]: I0223 13:12:07.076075 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
event={"ID":"0b2cc5255d43139b8419449b8700857e","Type":"ContainerStarted","Data":"534fcfa22aa78494656c9b2cf37c44e47934f98b0bf5a44044dc03b52be24f83"}
Feb 23 13:12:07.078212 master-0 kubenswrapper[7784]: I0223 13:12:07.078174 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="d30622693465b0b62d620607efa00658fed43c117d15217ddcd12f4e9ddc2419" exitCode=0
Feb 23 13:12:07.078212 master-0 kubenswrapper[7784]: I0223 13:12:07.078207 7784 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="e83f60b44b83cfd6e3f9aea87eba10757c2f61020bb495edff5a188472446875" exitCode=0
Feb 23 13:12:07.078369 master-0 kubenswrapper[7784]: I0223 13:12:07.078259 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b03398c7c5342531ea65126f53e9604327adfe194442ab3309f39be1e15bbf7"
Feb 23 13:12:07.078369 master-0 kubenswrapper[7784]: I0223 13:12:07.078296 7784 scope.go:117] "RemoveContainer" containerID="b5a81524e936f8406198a4fef17ebac4451d4192d941592802280834e23d1390"
Feb 23 13:12:07.078369 master-0 kubenswrapper[7784]: I0223 13:12:07.078299 7784 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:12:07.079650 master-0 kubenswrapper[7784]: I0223 13:12:07.079604 7784 generic.go:334] "Generic (PLEG): container finished" podID="b3e4636e-0cb6-492b-89b0-17ca9ff9e252" containerID="7ad0e23958703f89572138a68f4fe4a1db5362d40c0141e67bedc3ac0b588812" exitCode=0
Feb 23 13:12:07.079650 master-0 kubenswrapper[7784]: I0223 13:12:07.079633 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"b3e4636e-0cb6-492b-89b0-17ca9ff9e252","Type":"ContainerDied","Data":"7ad0e23958703f89572138a68f4fe4a1db5362d40c0141e67bedc3ac0b588812"}
Feb 23 13:12:07.118962 master-0 kubenswrapper[7784]: I0223 13:12:07.106938 7784 scope.go:117] "RemoveContainer" containerID="803106da6099883ee98c3575d18f2f07b351da86541aaf47ff092d2a33469b54"
Feb 23 13:12:07.539377 master-0 kubenswrapper[7784]: I0223 13:12:07.539283 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ad9373c007a4fcd25e70622bdc8deb" path="/var/lib/kubelet/pods/c9ad9373c007a4fcd25e70622bdc8deb/volumes"
Feb 23 13:12:07.540146 master-0 kubenswrapper[7784]: I0223 13:12:07.539965 7784 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Feb 23 13:12:07.565133 master-0 kubenswrapper[7784]: I0223 13:12:07.562138 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:12:07.565472 master-0 kubenswrapper[7784]: I0223 13:12:07.565185 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 23 13:12:07.565472 master-0 kubenswrapper[7784]: I0223 13:12:07.565210 7784 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0"
mirrorPodUID="28e669b6-6091-40aa-8eab-26d2a60be87e"
Feb 23 13:12:07.570185 master-0 kubenswrapper[7784]: I0223 13:12:07.570106 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 23 13:12:07.570185 master-0 kubenswrapper[7784]: I0223 13:12:07.570177 7784 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="28e669b6-6091-40aa-8eab-26d2a60be87e"
Feb 23 13:12:07.702105 master-0 kubenswrapper[7784]: I0223 13:12:07.702000 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:07.702105 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:07.702105 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:07.702105 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:07.702105 master-0 kubenswrapper[7784]: I0223 13:12:07.702078 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:08.091062 master-0 kubenswrapper[7784]: I0223 13:12:08.090850 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"0b2cc5255d43139b8419449b8700857e","Type":"ContainerStarted","Data":"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"}
Feb 23 13:12:08.091062 master-0 kubenswrapper[7784]: I0223 13:12:08.090910 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
event={"ID":"0b2cc5255d43139b8419449b8700857e","Type":"ContainerStarted","Data":"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"}
Feb 23 13:12:08.091062 master-0 kubenswrapper[7784]: I0223 13:12:08.090925 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"0b2cc5255d43139b8419449b8700857e","Type":"ContainerStarted","Data":"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"}
Feb 23 13:12:08.121138 master-0 kubenswrapper[7784]: I0223 13:12:08.121024 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.121005269 podStartE2EDuration="2.121005269s" podCreationTimestamp="2026-02-23 13:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:12:08.118140228 +0000 UTC m=+670.852993891" watchObservedRunningTime="2026-02-23 13:12:08.121005269 +0000 UTC m=+670.855858912"
Feb 23 13:12:08.409355 master-0 kubenswrapper[7784]: I0223 13:12:08.409299 7784 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 23 13:12:08.455531 master-0 kubenswrapper[7784]: I0223 13:12:08.455457 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-var-lock\") pod \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") "
Feb 23 13:12:08.455822 master-0 kubenswrapper[7784]: I0223 13:12:08.455629 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kube-api-access\") pod \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") "
Feb 23 13:12:08.455822 master-0 kubenswrapper[7784]: I0223 13:12:08.455775 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kubelet-dir\") pod \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\" (UID: \"b3e4636e-0cb6-492b-89b0-17ca9ff9e252\") "
Feb 23 13:12:08.456328 master-0 kubenswrapper[7784]: I0223 13:12:08.456291 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b3e4636e-0cb6-492b-89b0-17ca9ff9e252" (UID: "b3e4636e-0cb6-492b-89b0-17ca9ff9e252"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:12:08.456403 master-0 kubenswrapper[7784]: I0223 13:12:08.456385 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-var-lock" (OuterVolumeSpecName: "var-lock") pod "b3e4636e-0cb6-492b-89b0-17ca9ff9e252" (UID: "b3e4636e-0cb6-492b-89b0-17ca9ff9e252"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:08.461533 master-0 kubenswrapper[7784]: I0223 13:12:08.461476 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b3e4636e-0cb6-492b-89b0-17ca9ff9e252" (UID: "b3e4636e-0cb6-492b-89b0-17ca9ff9e252"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:08.559302 master-0 kubenswrapper[7784]: I0223 13:12:08.559209 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:08.559302 master-0 kubenswrapper[7784]: I0223 13:12:08.559272 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:08.559302 master-0 kubenswrapper[7784]: I0223 13:12:08.559286 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3e4636e-0cb6-492b-89b0-17ca9ff9e252-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:08.702508 master-0 kubenswrapper[7784]: I0223 13:12:08.702258 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:08.702508 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:08.702508 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:08.702508 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:08.702508 master-0 kubenswrapper[7784]: I0223 13:12:08.702424 7784 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:09.105802 master-0 kubenswrapper[7784]: I0223 13:12:09.105722 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:12:09.105802 master-0 kubenswrapper[7784]: I0223 13:12:09.105735 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"b3e4636e-0cb6-492b-89b0-17ca9ff9e252","Type":"ContainerDied","Data":"bf7c1f8a336dc688c688a82a7743a54d6258545018b3b12e6aea371fdcda658c"} Feb 23 13:12:09.105802 master-0 kubenswrapper[7784]: I0223 13:12:09.105823 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf7c1f8a336dc688c688a82a7743a54d6258545018b3b12e6aea371fdcda658c" Feb 23 13:12:09.700863 master-0 kubenswrapper[7784]: I0223 13:12:09.700787 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:09.700863 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:09.700863 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:09.700863 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:09.700863 master-0 kubenswrapper[7784]: I0223 13:12:09.700855 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:10.701396 master-0 kubenswrapper[7784]: I0223 13:12:10.701243 
7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:10.701396 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:10.701396 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:10.701396 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:10.702511 master-0 kubenswrapper[7784]: I0223 13:12:10.701436 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:11.701517 master-0 kubenswrapper[7784]: I0223 13:12:11.701444 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:11.701517 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:11.701517 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:11.701517 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:11.701517 master-0 kubenswrapper[7784]: I0223 13:12:11.701517 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:12.700573 master-0 kubenswrapper[7784]: I0223 13:12:12.700482 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:12.700573 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:12.700573 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:12.700573 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:12.700845 master-0 kubenswrapper[7784]: I0223 13:12:12.700606 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:13.700042 master-0 kubenswrapper[7784]: I0223 13:12:13.699992 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:13.700042 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:13.700042 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:13.700042 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:13.700707 master-0 kubenswrapper[7784]: I0223 13:12:13.700644 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:14.700695 master-0 kubenswrapper[7784]: I0223 13:12:14.700605 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:14.700695 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:14.700695 master-0 kubenswrapper[7784]: [+]process-running ok 
Feb 23 13:12:14.700695 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:14.701247 master-0 kubenswrapper[7784]: I0223 13:12:14.700734 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:15.025881 master-0 kubenswrapper[7784]: I0223 13:12:15.025784 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 23 13:12:15.026333 master-0 kubenswrapper[7784]: E0223 13:12:15.026290 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e4636e-0cb6-492b-89b0-17ca9ff9e252" containerName="installer" Feb 23 13:12:15.026419 master-0 kubenswrapper[7784]: I0223 13:12:15.026325 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e4636e-0cb6-492b-89b0-17ca9ff9e252" containerName="installer" Feb 23 13:12:15.026654 master-0 kubenswrapper[7784]: I0223 13:12:15.026612 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e4636e-0cb6-492b-89b0-17ca9ff9e252" containerName="installer" Feb 23 13:12:15.027632 master-0 kubenswrapper[7784]: I0223 13:12:15.027586 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.033577 master-0 kubenswrapper[7784]: I0223 13:12:15.033540 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 23 13:12:15.033776 master-0 kubenswrapper[7784]: I0223 13:12:15.033740 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-654sf" Feb 23 13:12:15.042691 master-0 kubenswrapper[7784]: I0223 13:12:15.042634 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 23 13:12:15.063121 master-0 kubenswrapper[7784]: I0223 13:12:15.063050 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3f51426-3809-4c1d-8628-0aca7873bde2-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.063423 master-0 kubenswrapper[7784]: I0223 13:12:15.063139 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.063423 master-0 kubenswrapper[7784]: I0223 13:12:15.063335 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.164553 master-0 
kubenswrapper[7784]: I0223 13:12:15.164476 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3f51426-3809-4c1d-8628-0aca7873bde2-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.164788 master-0 kubenswrapper[7784]: I0223 13:12:15.164594 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.164788 master-0 kubenswrapper[7784]: I0223 13:12:15.164667 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.164887 master-0 kubenswrapper[7784]: I0223 13:12:15.164834 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.165056 master-0 kubenswrapper[7784]: I0223 13:12:15.164971 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" 
Feb 23 13:12:15.187371 master-0 kubenswrapper[7784]: I0223 13:12:15.187269 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3f51426-3809-4c1d-8628-0aca7873bde2-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.358061 master-0 kubenswrapper[7784]: I0223 13:12:15.357895 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:15.701535 master-0 kubenswrapper[7784]: I0223 13:12:15.701283 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:15.701535 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:15.701535 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:15.701535 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:15.701535 master-0 kubenswrapper[7784]: I0223 13:12:15.701472 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:15.890697 master-0 kubenswrapper[7784]: I0223 13:12:15.890616 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 23 13:12:15.892370 master-0 kubenswrapper[7784]: W0223 13:12:15.892251 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb3f51426_3809_4c1d_8628_0aca7873bde2.slice/crio-52140ddac2ecd65b612c1aa8374928a874f32a4955eb244f4f17159dea7aea64 
WatchSource:0}: Error finding container 52140ddac2ecd65b612c1aa8374928a874f32a4955eb244f4f17159dea7aea64: Status 404 returned error can't find the container with id 52140ddac2ecd65b612c1aa8374928a874f32a4955eb244f4f17159dea7aea64 Feb 23 13:12:16.164622 master-0 kubenswrapper[7784]: I0223 13:12:16.164476 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"b3f51426-3809-4c1d-8628-0aca7873bde2","Type":"ContainerStarted","Data":"52140ddac2ecd65b612c1aa8374928a874f32a4955eb244f4f17159dea7aea64"} Feb 23 13:12:16.702924 master-0 kubenswrapper[7784]: I0223 13:12:16.702803 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:16.702924 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:16.702924 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:16.702924 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:16.703616 master-0 kubenswrapper[7784]: I0223 13:12:16.703539 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:16.752205 master-0 kubenswrapper[7784]: I0223 13:12:16.752070 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:12:16.752205 master-0 kubenswrapper[7784]: I0223 13:12:16.752206 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:12:16.752737 master-0 kubenswrapper[7784]: I0223 13:12:16.752241 7784 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:12:16.752737 master-0 kubenswrapper[7784]: I0223 13:12:16.752265 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:12:16.760209 master-0 kubenswrapper[7784]: I0223 13:12:16.760139 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:12:16.762221 master-0 kubenswrapper[7784]: I0223 13:12:16.760925 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:12:17.177548 master-0 kubenswrapper[7784]: I0223 13:12:17.177393 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"b3f51426-3809-4c1d-8628-0aca7873bde2","Type":"ContainerStarted","Data":"ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84"} Feb 23 13:12:17.183683 master-0 kubenswrapper[7784]: I0223 13:12:17.183445 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:12:17.187505 master-0 kubenswrapper[7784]: I0223 13:12:17.187435 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:12:17.210813 master-0 kubenswrapper[7784]: I0223 13:12:17.210663 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" podStartSLOduration=2.210629529 podStartE2EDuration="2.210629529s" podCreationTimestamp="2026-02-23 13:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:12:17.20326514 +0000 UTC m=+679.938118823" watchObservedRunningTime="2026-02-23 13:12:17.210629529 +0000 UTC m=+679.945483212" Feb 23 13:12:17.701311 master-0 kubenswrapper[7784]: I0223 13:12:17.701189 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:17.701311 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:17.701311 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:17.701311 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:17.701669 master-0 kubenswrapper[7784]: I0223 13:12:17.701319 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:18.701464 master-0 kubenswrapper[7784]: I0223 13:12:18.701385 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:18.701464 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:18.701464 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:18.701464 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:18.702446 master-0 kubenswrapper[7784]: I0223 13:12:18.701488 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:19.701564 master-0 
kubenswrapper[7784]: I0223 13:12:19.701465 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:19.701564 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:19.701564 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:19.701564 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:19.702568 master-0 kubenswrapper[7784]: I0223 13:12:19.701568 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:20.701482 master-0 kubenswrapper[7784]: I0223 13:12:20.701327 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:20.701482 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:20.701482 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:20.701482 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:20.701482 master-0 kubenswrapper[7784]: I0223 13:12:20.701447 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:21.701333 master-0 kubenswrapper[7784]: I0223 13:12:21.701208 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:21.701333 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:21.701333 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:21.701333 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:21.702573 master-0 kubenswrapper[7784]: I0223 13:12:21.701380 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:22.701232 master-0 kubenswrapper[7784]: I0223 13:12:22.701157 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:22.701232 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:22.701232 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:22.701232 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:22.701534 master-0 kubenswrapper[7784]: I0223 13:12:22.701251 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:23.700613 master-0 kubenswrapper[7784]: I0223 13:12:23.700542 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:23.700613 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:23.700613 master-0 kubenswrapper[7784]: 
[+]process-running ok Feb 23 13:12:23.700613 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:23.701163 master-0 kubenswrapper[7784]: I0223 13:12:23.700634 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:24.409886 master-0 kubenswrapper[7784]: I0223 13:12:24.408482 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ql8mx"] Feb 23 13:12:24.409886 master-0 kubenswrapper[7784]: I0223 13:12:24.409484 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.415430 master-0 kubenswrapper[7784]: I0223 13:12:24.412759 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 23 13:12:24.415430 master-0 kubenswrapper[7784]: I0223 13:12:24.413258 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-wmgj4" Feb 23 13:12:24.511952 master-0 kubenswrapper[7784]: I0223 13:12:24.511881 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.512275 master-0 kubenswrapper[7784]: I0223 13:12:24.512002 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-ready\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.512275 master-0 kubenswrapper[7784]: I0223 13:12:24.512048 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkxdw\" (UniqueName: \"kubernetes.io/projected/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-kube-api-access-rkxdw\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.512275 master-0 kubenswrapper[7784]: I0223 13:12:24.512070 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.613785 master-0 kubenswrapper[7784]: I0223 13:12:24.613701 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-ready\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.613785 master-0 kubenswrapper[7784]: I0223 13:12:24.613795 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkxdw\" (UniqueName: \"kubernetes.io/projected/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-kube-api-access-rkxdw\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.614169 master-0 kubenswrapper[7784]: I0223 13:12:24.613831 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.614169 master-0 kubenswrapper[7784]: I0223 13:12:24.614091 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.614330 master-0 kubenswrapper[7784]: I0223 13:12:24.614226 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.614330 master-0 kubenswrapper[7784]: I0223 13:12:24.614271 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-ready\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.615163 master-0 kubenswrapper[7784]: I0223 13:12:24.615124 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.641635 master-0 kubenswrapper[7784]: I0223 13:12:24.641562 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rkxdw\" (UniqueName: \"kubernetes.io/projected/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-kube-api-access-rkxdw\") pod \"cni-sysctl-allowlist-ds-ql8mx\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.699972 master-0 kubenswrapper[7784]: I0223 13:12:24.699801 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:24.699972 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:24.699972 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:24.699972 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:24.699972 master-0 kubenswrapper[7784]: I0223 13:12:24.699887 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:24.740984 master-0 kubenswrapper[7784]: I0223 13:12:24.740902 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:24.772831 master-0 kubenswrapper[7784]: W0223 13:12:24.772759 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a75c85_7f3b_4a73_b4ff_2c63c658ba82.slice/crio-235ef919d862de8f6e00e78c2072c34f2684034d902e9db9f564a5ee81092013 WatchSource:0}: Error finding container 235ef919d862de8f6e00e78c2072c34f2684034d902e9db9f564a5ee81092013: Status 404 returned error can't find the container with id 235ef919d862de8f6e00e78c2072c34f2684034d902e9db9f564a5ee81092013 Feb 23 13:12:24.790713 master-0 kubenswrapper[7784]: I0223 13:12:24.790646 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:12:24.795082 master-0 kubenswrapper[7784]: I0223 13:12:24.795034 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:12:25.235898 master-0 kubenswrapper[7784]: I0223 13:12:25.235692 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" event={"ID":"89a75c85-7f3b-4a73-b4ff-2c63c658ba82","Type":"ContainerStarted","Data":"51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db"} Feb 23 13:12:25.235898 master-0 kubenswrapper[7784]: I0223 13:12:25.235853 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" event={"ID":"89a75c85-7f3b-4a73-b4ff-2c63c658ba82","Type":"ContainerStarted","Data":"235ef919d862de8f6e00e78c2072c34f2684034d902e9db9f564a5ee81092013"} Feb 23 13:12:25.236402 master-0 kubenswrapper[7784]: I0223 13:12:25.236334 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:25.260970 master-0 kubenswrapper[7784]: I0223 13:12:25.260889 7784 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" podStartSLOduration=1.260869395 podStartE2EDuration="1.260869395s" podCreationTimestamp="2026-02-23 13:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:12:25.255460383 +0000 UTC m=+687.990314026" watchObservedRunningTime="2026-02-23 13:12:25.260869395 +0000 UTC m=+687.995723038" Feb 23 13:12:25.699783 master-0 kubenswrapper[7784]: I0223 13:12:25.699733 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:25.699783 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:25.699783 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:25.699783 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:25.700122 master-0 kubenswrapper[7784]: I0223 13:12:25.700099 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:26.259467 master-0 kubenswrapper[7784]: I0223 13:12:26.259415 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" Feb 23 13:12:26.699555 master-0 kubenswrapper[7784]: I0223 13:12:26.699443 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:26.699555 master-0 kubenswrapper[7784]: [-]has-synced failed: 
reason withheld Feb 23 13:12:26.699555 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:26.699555 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:26.699555 master-0 kubenswrapper[7784]: I0223 13:12:26.699516 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:27.700752 master-0 kubenswrapper[7784]: I0223 13:12:27.700692 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:27.700752 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:27.700752 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:27.700752 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:27.701499 master-0 kubenswrapper[7784]: I0223 13:12:27.700778 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:28.041567 master-0 kubenswrapper[7784]: I0223 13:12:28.041503 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ql8mx"] Feb 23 13:12:28.253327 master-0 kubenswrapper[7784]: I0223 13:12:28.253264 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" podUID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" gracePeriod=30 Feb 23 13:12:28.701171 master-0 
kubenswrapper[7784]: I0223 13:12:28.701028 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:28.701171 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:28.701171 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:28.701171 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:28.702165 master-0 kubenswrapper[7784]: I0223 13:12:28.701201 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:29.643445 master-0 kubenswrapper[7784]: I0223 13:12:29.643382 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d"] Feb 23 13:12:29.644431 master-0 kubenswrapper[7784]: I0223 13:12:29.644402 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:12:29.647274 master-0 kubenswrapper[7784]: I0223 13:12:29.647221 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rlrxt" Feb 23 13:12:29.678609 master-0 kubenswrapper[7784]: I0223 13:12:29.678476 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d"] Feb 23 13:12:29.700903 master-0 kubenswrapper[7784]: I0223 13:12:29.700824 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:29.700903 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:29.700903 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:29.700903 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:29.701225 master-0 kubenswrapper[7784]: I0223 13:12:29.700906 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:29.814127 master-0 kubenswrapper[7784]: I0223 13:12:29.814023 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:12:29.814374 master-0 kubenswrapper[7784]: I0223 13:12:29.814206 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ndf8h\" (UniqueName: \"kubernetes.io/projected/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-kube-api-access-ndf8h\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:12:29.916199 master-0 kubenswrapper[7784]: I0223 13:12:29.916037 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndf8h\" (UniqueName: \"kubernetes.io/projected/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-kube-api-access-ndf8h\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:12:29.916199 master-0 kubenswrapper[7784]: I0223 13:12:29.916191 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:12:29.920608 master-0 kubenswrapper[7784]: I0223 13:12:29.920553 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:12:29.946925 master-0 kubenswrapper[7784]: I0223 13:12:29.946871 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndf8h\" (UniqueName: \"kubernetes.io/projected/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-kube-api-access-ndf8h\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " 
pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:12:29.965136 master-0 kubenswrapper[7784]: I0223 13:12:29.965041 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:12:30.428117 master-0 kubenswrapper[7784]: I0223 13:12:30.427994 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d"] Feb 23 13:12:30.436427 master-0 kubenswrapper[7784]: W0223 13:12:30.436295 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd76d5e5a_3009_42c9_b981_e6ddfa3ba13e.slice/crio-0ec420225c84b73dd443227473f7e5be0a534249c7a20c4f249c305d05092cd3 WatchSource:0}: Error finding container 0ec420225c84b73dd443227473f7e5be0a534249c7a20c4f249c305d05092cd3: Status 404 returned error can't find the container with id 0ec420225c84b73dd443227473f7e5be0a534249c7a20c4f249c305d05092cd3 Feb 23 13:12:30.473378 master-0 kubenswrapper[7784]: I0223 13:12:30.473301 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 23 13:12:30.473615 master-0 kubenswrapper[7784]: I0223 13:12:30.473589 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" podUID="b3f51426-3809-4c1d-8628-0aca7873bde2" containerName="installer" containerID="cri-o://ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84" gracePeriod=30 Feb 23 13:12:30.701363 master-0 kubenswrapper[7784]: I0223 13:12:30.700136 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:30.701363 master-0 kubenswrapper[7784]: 
[-]has-synced failed: reason withheld Feb 23 13:12:30.701363 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:30.701363 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:30.701363 master-0 kubenswrapper[7784]: I0223 13:12:30.700207 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:31.275259 master-0 kubenswrapper[7784]: I0223 13:12:31.275188 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/2.log" Feb 23 13:12:31.275993 master-0 kubenswrapper[7784]: I0223 13:12:31.275952 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/1.log" Feb 23 13:12:31.276457 master-0 kubenswrapper[7784]: I0223 13:12:31.276417 7784 generic.go:334] "Generic (PLEG): container finished" podID="878aa813-a8b9-4a6f-8086-778df276d0d7" containerID="3e7dbf4208abe5c9d935ae2680f6b0cac93b049b64aaa57ef376ac31460e3774" exitCode=1 Feb 23 13:12:31.276544 master-0 kubenswrapper[7784]: I0223 13:12:31.276508 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerDied","Data":"3e7dbf4208abe5c9d935ae2680f6b0cac93b049b64aaa57ef376ac31460e3774"} Feb 23 13:12:31.276587 master-0 kubenswrapper[7784]: I0223 13:12:31.276569 7784 scope.go:117] "RemoveContainer" containerID="2e5a5c45572547d68765aa2317c14d26774c109bceb25a699d848d50d57f589e" Feb 23 13:12:31.277097 master-0 kubenswrapper[7784]: I0223 13:12:31.277053 7784 scope.go:117] "RemoveContainer" 
containerID="3e7dbf4208abe5c9d935ae2680f6b0cac93b049b64aaa57ef376ac31460e3774" Feb 23 13:12:31.277405 master-0 kubenswrapper[7784]: E0223 13:12:31.277373 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-k9h69_openshift-ingress-operator(878aa813-a8b9-4a6f-8086-778df276d0d7)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" podUID="878aa813-a8b9-4a6f-8086-778df276d0d7" Feb 23 13:12:31.280833 master-0 kubenswrapper[7784]: I0223 13:12:31.280780 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" event={"ID":"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e","Type":"ContainerStarted","Data":"bd98c7e842515558214a3ccfe07fbda46c6bc77f3f2c95df897689dcd06d0a29"} Feb 23 13:12:31.280900 master-0 kubenswrapper[7784]: I0223 13:12:31.280838 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" event={"ID":"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e","Type":"ContainerStarted","Data":"7faba1a901789bc518e1c85ce3c1cdd58fe8565b415067e291336c72ebdad9fa"} Feb 23 13:12:31.280900 master-0 kubenswrapper[7784]: I0223 13:12:31.280854 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" event={"ID":"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e","Type":"ContainerStarted","Data":"0ec420225c84b73dd443227473f7e5be0a534249c7a20c4f249c305d05092cd3"} Feb 23 13:12:31.354372 master-0 kubenswrapper[7784]: I0223 13:12:31.352274 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" podStartSLOduration=2.352255089 podStartE2EDuration="2.352255089s" podCreationTimestamp="2026-02-23 13:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:12:31.317744077 +0000 UTC m=+694.052597720" watchObservedRunningTime="2026-02-23 13:12:31.352255089 +0000 UTC m=+694.087108732" Feb 23 13:12:31.354372 master-0 kubenswrapper[7784]: I0223 13:12:31.352859 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"] Feb 23 13:12:31.354372 master-0 kubenswrapper[7784]: I0223 13:12:31.353074 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerName="multus-admission-controller" containerID="cri-o://e605f56ea553ea41b317a4b82ddae8751a5476ad313ba687cfe0354516e82158" gracePeriod=30 Feb 23 13:12:31.354372 master-0 kubenswrapper[7784]: I0223 13:12:31.353129 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerName="kube-rbac-proxy" containerID="cri-o://1a566738f1d6ae1fbd631fc2a674cd4e994eb7c4949566fcd48562d5dff33cf2" gracePeriod=30 Feb 23 13:12:31.700513 master-0 kubenswrapper[7784]: I0223 13:12:31.700332 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:31.700513 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:31.700513 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:31.700513 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:31.700513 master-0 kubenswrapper[7784]: I0223 13:12:31.700441 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" 
podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:32.290612 master-0 kubenswrapper[7784]: I0223 13:12:32.290571 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/2.log" Feb 23 13:12:32.293358 master-0 kubenswrapper[7784]: I0223 13:12:32.293277 7784 generic.go:334] "Generic (PLEG): container finished" podID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerID="1a566738f1d6ae1fbd631fc2a674cd4e994eb7c4949566fcd48562d5dff33cf2" exitCode=0 Feb 23 13:12:32.293439 master-0 kubenswrapper[7784]: I0223 13:12:32.293370 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" event={"ID":"1b0122c7-1407-4a35-afcc-2c6b1225e830","Type":"ContainerDied","Data":"1a566738f1d6ae1fbd631fc2a674cd4e994eb7c4949566fcd48562d5dff33cf2"} Feb 23 13:12:32.475357 master-0 kubenswrapper[7784]: I0223 13:12:32.475242 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 23 13:12:32.476179 master-0 kubenswrapper[7784]: I0223 13:12:32.476144 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.491420 master-0 kubenswrapper[7784]: I0223 13:12:32.491327 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 23 13:12:32.559311 master-0 kubenswrapper[7784]: I0223 13:12:32.559095 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abccfbee-41f4-4557-b953-eb6e719aee31-kube-api-access\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.559693 master-0 kubenswrapper[7784]: I0223 13:12:32.559671 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.559825 master-0 kubenswrapper[7784]: I0223 13:12:32.559806 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-var-lock\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.669362 master-0 kubenswrapper[7784]: I0223 13:12:32.661501 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abccfbee-41f4-4557-b953-eb6e719aee31-kube-api-access\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.669362 master-0 kubenswrapper[7784]: I0223 13:12:32.661632 7784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.669362 master-0 kubenswrapper[7784]: I0223 13:12:32.661688 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-var-lock\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.669362 master-0 kubenswrapper[7784]: I0223 13:12:32.661820 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-var-lock\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.669362 master-0 kubenswrapper[7784]: I0223 13:12:32.662315 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.695488 master-0 kubenswrapper[7784]: I0223 13:12:32.695437 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abccfbee-41f4-4557-b953-eb6e719aee31-kube-api-access\") pod \"installer-5-master-0\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:32.708744 master-0 kubenswrapper[7784]: I0223 13:12:32.708668 7784 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:32.708744 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:32.708744 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:32.708744 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:32.709101 master-0 kubenswrapper[7784]: I0223 13:12:32.708766 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:32.798406 master-0 kubenswrapper[7784]: I0223 13:12:32.798329 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:12:33.249441 master-0 kubenswrapper[7784]: I0223 13:12:33.249121 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 23 13:12:33.256129 master-0 kubenswrapper[7784]: W0223 13:12:33.256048 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podabccfbee_41f4_4557_b953_eb6e719aee31.slice/crio-5ce34e8f56df01165d847f6464e77360e3f0978547ad68a6025ff1d62dabfaac WatchSource:0}: Error finding container 5ce34e8f56df01165d847f6464e77360e3f0978547ad68a6025ff1d62dabfaac: Status 404 returned error can't find the container with id 5ce34e8f56df01165d847f6464e77360e3f0978547ad68a6025ff1d62dabfaac Feb 23 13:12:33.302452 master-0 kubenswrapper[7784]: I0223 13:12:33.302388 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" 
event={"ID":"abccfbee-41f4-4557-b953-eb6e719aee31","Type":"ContainerStarted","Data":"5ce34e8f56df01165d847f6464e77360e3f0978547ad68a6025ff1d62dabfaac"} Feb 23 13:12:33.700810 master-0 kubenswrapper[7784]: I0223 13:12:33.700726 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:33.700810 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:33.700810 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:33.700810 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:33.700810 master-0 kubenswrapper[7784]: I0223 13:12:33.700811 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:34.314604 master-0 kubenswrapper[7784]: I0223 13:12:34.314542 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"abccfbee-41f4-4557-b953-eb6e719aee31","Type":"ContainerStarted","Data":"d45c58d10778fd4bb86b1fa48d56249170c3cf26b7e64edff21eff2bddff7690"} Feb 23 13:12:34.335742 master-0 kubenswrapper[7784]: I0223 13:12:34.335576 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.335501521 podStartE2EDuration="2.335501521s" podCreationTimestamp="2026-02-23 13:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:12:34.332838226 +0000 UTC m=+697.067691879" watchObservedRunningTime="2026-02-23 13:12:34.335501521 +0000 UTC m=+697.070355194" Feb 23 13:12:34.702006 master-0 
kubenswrapper[7784]: I0223 13:12:34.701828 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:34.702006 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:34.702006 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:34.702006 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:34.702006 master-0 kubenswrapper[7784]: I0223 13:12:34.701927 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:34.744915 master-0 kubenswrapper[7784]: E0223 13:12:34.744807 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:12:34.746778 master-0 kubenswrapper[7784]: E0223 13:12:34.746723 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:12:34.748313 master-0 kubenswrapper[7784]: E0223 13:12:34.748255 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:12:34.748424 master-0 kubenswrapper[7784]: E0223 13:12:34.748311 7784 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" podUID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" containerName="kube-multus-additional-cni-plugins" Feb 23 13:12:35.702938 master-0 kubenswrapper[7784]: I0223 13:12:35.702845 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:35.702938 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:35.702938 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:35.702938 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:35.703693 master-0 kubenswrapper[7784]: I0223 13:12:35.702942 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:36.700616 master-0 kubenswrapper[7784]: I0223 13:12:36.700514 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:36.700616 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:36.700616 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:36.700616 master-0 kubenswrapper[7784]: healthz check 
failed Feb 23 13:12:36.700936 master-0 kubenswrapper[7784]: I0223 13:12:36.700643 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:37.701485 master-0 kubenswrapper[7784]: I0223 13:12:37.701413 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:37.701485 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:37.701485 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:37.701485 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:37.702035 master-0 kubenswrapper[7784]: I0223 13:12:37.701512 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:38.706887 master-0 kubenswrapper[7784]: I0223 13:12:38.706795 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:38.706887 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:38.706887 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:38.706887 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:38.707541 master-0 kubenswrapper[7784]: I0223 13:12:38.706898 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" 
podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:39.700740 master-0 kubenswrapper[7784]: I0223 13:12:39.700660 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:39.700740 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:39.700740 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:39.700740 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:39.701069 master-0 kubenswrapper[7784]: I0223 13:12:39.700767 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:40.671634 master-0 kubenswrapper[7784]: I0223 13:12:40.671524 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 23 13:12:40.673101 master-0 kubenswrapper[7784]: I0223 13:12:40.673035 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:40.675796 master-0 kubenswrapper[7784]: I0223 13:12:40.675723 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-lz65v" Feb 23 13:12:40.675979 master-0 kubenswrapper[7784]: I0223 13:12:40.675813 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 13:12:40.686304 master-0 kubenswrapper[7784]: I0223 13:12:40.686200 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 23 13:12:40.701879 master-0 kubenswrapper[7784]: I0223 13:12:40.701798 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:40.701879 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:40.701879 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:40.701879 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:40.702456 master-0 kubenswrapper[7784]: I0223 13:12:40.701919 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:40.780173 master-0 kubenswrapper[7784]: I0223 13:12:40.780079 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " pod="openshift-kube-controller-manager/installer-4-master-0" 
Feb 23 13:12:40.780173 master-0 kubenswrapper[7784]: I0223 13:12:40.780156 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-var-lock\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:40.780712 master-0 kubenswrapper[7784]: I0223 13:12:40.780637 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:40.882268 master-0 kubenswrapper[7784]: I0223 13:12:40.882151 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:40.882268 master-0 kubenswrapper[7784]: I0223 13:12:40.882256 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-var-lock\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:40.882775 master-0 kubenswrapper[7784]: I0223 13:12:40.882323 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " 
pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:40.882775 master-0 kubenswrapper[7784]: I0223 13:12:40.882363 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:40.882775 master-0 kubenswrapper[7784]: I0223 13:12:40.882480 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-var-lock\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:40.900229 master-0 kubenswrapper[7784]: I0223 13:12:40.900165 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:41.040395 master-0 kubenswrapper[7784]: I0223 13:12:41.040280 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:12:41.535582 master-0 kubenswrapper[7784]: I0223 13:12:41.535316 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 23 13:12:41.542735 master-0 kubenswrapper[7784]: W0223 13:12:41.542624 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6ac1ae06_bb6b_448f_b2ab_cf2adc5b3991.slice/crio-1a59a96ae4654f36cd7044dc91477d79119c89e04c37a4bf1eb93ffbac15b813 WatchSource:0}: Error finding container 1a59a96ae4654f36cd7044dc91477d79119c89e04c37a4bf1eb93ffbac15b813: Status 404 returned error can't find the container with id 1a59a96ae4654f36cd7044dc91477d79119c89e04c37a4bf1eb93ffbac15b813 Feb 23 13:12:41.701058 master-0 kubenswrapper[7784]: I0223 13:12:41.700556 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:41.701058 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:41.701058 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:41.701058 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:41.701058 master-0 kubenswrapper[7784]: I0223 13:12:41.700662 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:42.379799 master-0 kubenswrapper[7784]: I0223 13:12:42.379637 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" 
event={"ID":"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991","Type":"ContainerStarted","Data":"0430bea27e87ced3ed24fee214199f1e9afd86d96157c7f7feb638bd03a355f0"} Feb 23 13:12:42.379799 master-0 kubenswrapper[7784]: I0223 13:12:42.379705 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991","Type":"ContainerStarted","Data":"1a59a96ae4654f36cd7044dc91477d79119c89e04c37a4bf1eb93ffbac15b813"} Feb 23 13:12:42.398293 master-0 kubenswrapper[7784]: I0223 13:12:42.398193 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.39816328 podStartE2EDuration="2.39816328s" podCreationTimestamp="2026-02-23 13:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:12:42.395518575 +0000 UTC m=+705.130372228" watchObservedRunningTime="2026-02-23 13:12:42.39816328 +0000 UTC m=+705.133016923" Feb 23 13:12:42.701505 master-0 kubenswrapper[7784]: I0223 13:12:42.701137 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:42.701505 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:42.701505 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:42.701505 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:42.701505 master-0 kubenswrapper[7784]: I0223 13:12:42.701246 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 
13:12:43.700572 master-0 kubenswrapper[7784]: I0223 13:12:43.700490 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:43.700572 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:43.700572 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:43.700572 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:43.701177 master-0 kubenswrapper[7784]: I0223 13:12:43.700576 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:44.700798 master-0 kubenswrapper[7784]: I0223 13:12:44.700685 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:44.700798 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:44.700798 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:44.700798 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:44.701815 master-0 kubenswrapper[7784]: I0223 13:12:44.700824 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:44.743070 master-0 kubenswrapper[7784]: E0223 13:12:44.742937 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
is stopping, stdout: , stderr: , exit code -1" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:12:44.744706 master-0 kubenswrapper[7784]: E0223 13:12:44.744579 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:12:44.746016 master-0 kubenswrapper[7784]: E0223 13:12:44.745959 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:12:44.746126 master-0 kubenswrapper[7784]: E0223 13:12:44.746015 7784 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" podUID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" containerName="kube-multus-additional-cni-plugins" Feb 23 13:12:45.516039 master-0 kubenswrapper[7784]: I0223 13:12:45.515908 7784 scope.go:117] "RemoveContainer" containerID="3e7dbf4208abe5c9d935ae2680f6b0cac93b049b64aaa57ef376ac31460e3774" Feb 23 13:12:45.516470 master-0 kubenswrapper[7784]: E0223 13:12:45.516326 7784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-k9h69_openshift-ingress-operator(878aa813-a8b9-4a6f-8086-778df276d0d7)\"" 
pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" podUID="878aa813-a8b9-4a6f-8086-778df276d0d7" Feb 23 13:12:45.700947 master-0 kubenswrapper[7784]: I0223 13:12:45.700855 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:45.700947 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:45.700947 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:45.700947 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:45.701704 master-0 kubenswrapper[7784]: I0223 13:12:45.700968 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:46.701654 master-0 kubenswrapper[7784]: I0223 13:12:46.701538 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:46.701654 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:46.701654 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:46.701654 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:46.702585 master-0 kubenswrapper[7784]: I0223 13:12:46.701670 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:47.702700 master-0 kubenswrapper[7784]: I0223 13:12:47.702419 7784 patch_prober.go:28] 
interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:47.702700 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:47.702700 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:47.702700 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:47.702700 master-0 kubenswrapper[7784]: I0223 13:12:47.702559 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:47.931620 master-0 kubenswrapper[7784]: I0223 13:12:47.931535 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-retry-1-master-0_b3f51426-3809-4c1d-8628-0aca7873bde2/installer/0.log" Feb 23 13:12:47.931620 master-0 kubenswrapper[7784]: I0223 13:12:47.931634 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:48.104664 master-0 kubenswrapper[7784]: I0223 13:12:48.104589 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-kubelet-dir\") pod \"b3f51426-3809-4c1d-8628-0aca7873bde2\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " Feb 23 13:12:48.104664 master-0 kubenswrapper[7784]: I0223 13:12:48.104651 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-var-lock\") pod \"b3f51426-3809-4c1d-8628-0aca7873bde2\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " Feb 23 13:12:48.104664 master-0 kubenswrapper[7784]: I0223 13:12:48.104682 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3f51426-3809-4c1d-8628-0aca7873bde2-kube-api-access\") pod \"b3f51426-3809-4c1d-8628-0aca7873bde2\" (UID: \"b3f51426-3809-4c1d-8628-0aca7873bde2\") " Feb 23 13:12:48.105197 master-0 kubenswrapper[7784]: I0223 13:12:48.104741 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b3f51426-3809-4c1d-8628-0aca7873bde2" (UID: "b3f51426-3809-4c1d-8628-0aca7873bde2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:48.105197 master-0 kubenswrapper[7784]: I0223 13:12:48.104797 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-var-lock" (OuterVolumeSpecName: "var-lock") pod "b3f51426-3809-4c1d-8628-0aca7873bde2" (UID: "b3f51426-3809-4c1d-8628-0aca7873bde2"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:48.105569 master-0 kubenswrapper[7784]: I0223 13:12:48.105538 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:48.105569 master-0 kubenswrapper[7784]: I0223 13:12:48.105559 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3f51426-3809-4c1d-8628-0aca7873bde2-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:48.110107 master-0 kubenswrapper[7784]: I0223 13:12:48.109400 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f51426-3809-4c1d-8628-0aca7873bde2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b3f51426-3809-4c1d-8628-0aca7873bde2" (UID: "b3f51426-3809-4c1d-8628-0aca7873bde2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:48.207681 master-0 kubenswrapper[7784]: I0223 13:12:48.207457 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3f51426-3809-4c1d-8628-0aca7873bde2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:48.430787 master-0 kubenswrapper[7784]: I0223 13:12:48.430706 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-retry-1-master-0_b3f51426-3809-4c1d-8628-0aca7873bde2/installer/0.log" Feb 23 13:12:48.430787 master-0 kubenswrapper[7784]: I0223 13:12:48.430778 7784 generic.go:334] "Generic (PLEG): container finished" podID="b3f51426-3809-4c1d-8628-0aca7873bde2" containerID="ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84" exitCode=1 Feb 23 13:12:48.430787 master-0 kubenswrapper[7784]: I0223 13:12:48.430814 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"b3f51426-3809-4c1d-8628-0aca7873bde2","Type":"ContainerDied","Data":"ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84"} Feb 23 13:12:48.431562 master-0 kubenswrapper[7784]: I0223 13:12:48.430861 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"b3f51426-3809-4c1d-8628-0aca7873bde2","Type":"ContainerDied","Data":"52140ddac2ecd65b612c1aa8374928a874f32a4955eb244f4f17159dea7aea64"} Feb 23 13:12:48.431562 master-0 kubenswrapper[7784]: I0223 13:12:48.430888 7784 scope.go:117] "RemoveContainer" containerID="ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84" Feb 23 13:12:48.431562 master-0 kubenswrapper[7784]: I0223 13:12:48.430913 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 23 13:12:48.457782 master-0 kubenswrapper[7784]: I0223 13:12:48.457495 7784 scope.go:117] "RemoveContainer" containerID="ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84" Feb 23 13:12:48.459277 master-0 kubenswrapper[7784]: E0223 13:12:48.458168 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84\": container with ID starting with ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84 not found: ID does not exist" containerID="ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84" Feb 23 13:12:48.459277 master-0 kubenswrapper[7784]: I0223 13:12:48.458212 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84"} err="failed to get container status \"ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84\": rpc error: code = NotFound desc = could not find container \"ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84\": container with ID starting with ba28ec9dad4f5879707e3fac018bb15bc74f36f7d0f9f9ba7ba583d410833f84 not found: ID does not exist" Feb 23 13:12:48.473396 master-0 kubenswrapper[7784]: I0223 13:12:48.470780 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 23 13:12:48.474824 master-0 kubenswrapper[7784]: I0223 13:12:48.474292 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 23 13:12:48.702395 master-0 kubenswrapper[7784]: I0223 13:12:48.702199 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:48.702395 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:48.702395 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:48.702395 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:48.703776 master-0 kubenswrapper[7784]: I0223 13:12:48.702371 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:49.522331 master-0 kubenswrapper[7784]: I0223 13:12:49.522232 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f51426-3809-4c1d-8628-0aca7873bde2" path="/var/lib/kubelet/pods/b3f51426-3809-4c1d-8628-0aca7873bde2/volumes" Feb 23 13:12:49.701332 master-0 kubenswrapper[7784]: I0223 13:12:49.701220 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:12:49.701332 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:12:49.701332 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:12:49.701332 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:12:49.701846 master-0 kubenswrapper[7784]: I0223 13:12:49.701370 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:12:50.700917 master-0 kubenswrapper[7784]: I0223 13:12:50.700820 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:50.700917 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:50.700917 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:50.700917 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:50.700917 master-0 kubenswrapper[7784]: I0223 13:12:50.700901 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:51.702173 master-0 kubenswrapper[7784]: I0223 13:12:51.702049 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:51.702173 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:51.702173 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:51.702173 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:51.702173 master-0 kubenswrapper[7784]: I0223 13:12:51.702170 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:52.701598 master-0 kubenswrapper[7784]: I0223 13:12:52.701490 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:52.701598 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:52.701598 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:52.701598 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:52.701598 master-0 kubenswrapper[7784]: I0223 13:12:52.701599 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:53.701677 master-0 kubenswrapper[7784]: I0223 13:12:53.701590 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:53.701677 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:53.701677 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:53.701677 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:53.702603 master-0 kubenswrapper[7784]: I0223 13:12:53.701691 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:54.701219 master-0 kubenswrapper[7784]: I0223 13:12:54.701122 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:54.701219 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:54.701219 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:54.701219 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:54.701604 master-0 kubenswrapper[7784]: I0223 13:12:54.701239 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:54.745961 master-0 kubenswrapper[7784]: E0223 13:12:54.745819 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 23 13:12:54.748685 master-0 kubenswrapper[7784]: E0223 13:12:54.748607 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 23 13:12:54.750859 master-0 kubenswrapper[7784]: E0223 13:12:54.750788 7784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 23 13:12:54.750983 master-0 kubenswrapper[7784]: E0223 13:12:54.750856 7784 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" podUID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" containerName="kube-multus-additional-cni-plugins"
Feb 23 13:12:55.701132 master-0 kubenswrapper[7784]: I0223 13:12:55.701005 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:55.701132 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:55.701132 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:55.701132 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:55.701132 master-0 kubenswrapper[7784]: I0223 13:12:55.701128 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:56.701753 master-0 kubenswrapper[7784]: I0223 13:12:56.701643 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:56.701753 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:56.701753 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:56.701753 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:56.702820 master-0 kubenswrapper[7784]: I0223 13:12:56.701758 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:57.701884 master-0 kubenswrapper[7784]: I0223 13:12:57.701818 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:57.701884 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:57.701884 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:57.701884 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:57.702593 master-0 kubenswrapper[7784]: I0223 13:12:57.701909 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:58.395930 master-0 kubenswrapper[7784]: I0223 13:12:58.395793 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ql8mx_89a75c85-7f3b-4a73-b4ff-2c63c658ba82/kube-multus-additional-cni-plugins/0.log"
Feb 23 13:12:58.396205 master-0 kubenswrapper[7784]: I0223 13:12:58.396067 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx"
Feb 23 13:12:58.493525 master-0 kubenswrapper[7784]: I0223 13:12:58.493391 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-cni-sysctl-allowlist\") pod \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") "
Feb 23 13:12:58.494034 master-0 kubenswrapper[7784]: I0223 13:12:58.493672 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-tuning-conf-dir\") pod \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") "
Feb 23 13:12:58.494034 master-0 kubenswrapper[7784]: I0223 13:12:58.493769 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-ready\") pod \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") "
Feb 23 13:12:58.494034 master-0 kubenswrapper[7784]: I0223 13:12:58.493817 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkxdw\" (UniqueName: \"kubernetes.io/projected/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-kube-api-access-rkxdw\") pod \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\" (UID: \"89a75c85-7f3b-4a73-b4ff-2c63c658ba82\") "
Feb 23 13:12:58.494034 master-0 kubenswrapper[7784]: I0223 13:12:58.493827 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "89a75c85-7f3b-4a73-b4ff-2c63c658ba82" (UID: "89a75c85-7f3b-4a73-b4ff-2c63c658ba82"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:12:58.494401 master-0 kubenswrapper[7784]: I0223 13:12:58.494287 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "89a75c85-7f3b-4a73-b4ff-2c63c658ba82" (UID: "89a75c85-7f3b-4a73-b4ff-2c63c658ba82"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:12:58.494401 master-0 kubenswrapper[7784]: I0223 13:12:58.494369 7784 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:58.494401 master-0 kubenswrapper[7784]: I0223 13:12:58.494288 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-ready" (OuterVolumeSpecName: "ready") pod "89a75c85-7f3b-4a73-b4ff-2c63c658ba82" (UID: "89a75c85-7f3b-4a73-b4ff-2c63c658ba82"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:12:58.499256 master-0 kubenswrapper[7784]: I0223 13:12:58.499192 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-kube-api-access-rkxdw" (OuterVolumeSpecName: "kube-api-access-rkxdw") pod "89a75c85-7f3b-4a73-b4ff-2c63c658ba82" (UID: "89a75c85-7f3b-4a73-b4ff-2c63c658ba82"). InnerVolumeSpecName "kube-api-access-rkxdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:12:58.547638 master-0 kubenswrapper[7784]: I0223 13:12:58.547544 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ql8mx_89a75c85-7f3b-4a73-b4ff-2c63c658ba82/kube-multus-additional-cni-plugins/0.log"
Feb 23 13:12:58.547638 master-0 kubenswrapper[7784]: I0223 13:12:58.547658 7784 generic.go:334] "Generic (PLEG): container finished" podID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db" exitCode=137
Feb 23 13:12:58.548029 master-0 kubenswrapper[7784]: I0223 13:12:58.547721 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" event={"ID":"89a75c85-7f3b-4a73-b4ff-2c63c658ba82","Type":"ContainerDied","Data":"51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db"}
Feb 23 13:12:58.548029 master-0 kubenswrapper[7784]: I0223 13:12:58.547774 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx" event={"ID":"89a75c85-7f3b-4a73-b4ff-2c63c658ba82","Type":"ContainerDied","Data":"235ef919d862de8f6e00e78c2072c34f2684034d902e9db9f564a5ee81092013"}
Feb 23 13:12:58.548029 master-0 kubenswrapper[7784]: I0223 13:12:58.547810 7784 scope.go:117] "RemoveContainer" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db"
Feb 23 13:12:58.548029 master-0 kubenswrapper[7784]: I0223 13:12:58.547872 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ql8mx"
Feb 23 13:12:58.574162 master-0 kubenswrapper[7784]: I0223 13:12:58.574072 7784 scope.go:117] "RemoveContainer" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db"
Feb 23 13:12:58.575082 master-0 kubenswrapper[7784]: E0223 13:12:58.574946 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db\": container with ID starting with 51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db not found: ID does not exist" containerID="51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db"
Feb 23 13:12:58.575082 master-0 kubenswrapper[7784]: I0223 13:12:58.575043 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db"} err="failed to get container status \"51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db\": rpc error: code = NotFound desc = could not find container \"51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db\": container with ID starting with 51115cadf4d871b8479b405de774c1f4980266f4e16bdad71261aa22d8c0a3db not found: ID does not exist"
Feb 23 13:12:58.597151 master-0 kubenswrapper[7784]: I0223 13:12:58.597061 7784 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:58.597367 master-0 kubenswrapper[7784]: I0223 13:12:58.597177 7784 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-ready\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:58.597367 master-0 kubenswrapper[7784]: I0223 13:12:58.597205 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkxdw\" (UniqueName: \"kubernetes.io/projected/89a75c85-7f3b-4a73-b4ff-2c63c658ba82-kube-api-access-rkxdw\") on node \"master-0\" DevicePath \"\""
Feb 23 13:12:58.603601 master-0 kubenswrapper[7784]: I0223 13:12:58.603495 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ql8mx"]
Feb 23 13:12:58.616130 master-0 kubenswrapper[7784]: I0223 13:12:58.616023 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ql8mx"]
Feb 23 13:12:58.701830 master-0 kubenswrapper[7784]: I0223 13:12:58.701699 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:58.701830 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:58.701830 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:58.701830 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:58.701830 master-0 kubenswrapper[7784]: I0223 13:12:58.701776 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:12:59.523623 master-0 kubenswrapper[7784]: I0223 13:12:59.523542 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" path="/var/lib/kubelet/pods/89a75c85-7f3b-4a73-b4ff-2c63c658ba82/volumes"
Feb 23 13:12:59.702258 master-0 kubenswrapper[7784]: I0223 13:12:59.702124 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:12:59.702258 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:12:59.702258 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:12:59.702258 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:12:59.702258 master-0 kubenswrapper[7784]: I0223 13:12:59.702247 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:00.516577 master-0 kubenswrapper[7784]: I0223 13:13:00.516462 7784 scope.go:117] "RemoveContainer" containerID="3e7dbf4208abe5c9d935ae2680f6b0cac93b049b64aaa57ef376ac31460e3774"
Feb 23 13:13:00.700841 master-0 kubenswrapper[7784]: I0223 13:13:00.700775 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:00.700841 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:00.700841 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:00.700841 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:00.701333 master-0 kubenswrapper[7784]: I0223 13:13:00.700866 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:01.581679 master-0 kubenswrapper[7784]: I0223 13:13:01.581626 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/2.log"
Feb 23 13:13:01.582662 master-0 kubenswrapper[7784]: I0223 13:13:01.582284 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerStarted","Data":"c83498da67d9371893a003fc0cf39cf6626ee5bbdf6f92277274b5695bb058d4"}
Feb 23 13:13:01.597522 master-0 kubenswrapper[7784]: I0223 13:13:01.597374 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-rz2zl_1b0122c7-1407-4a35-afcc-2c6b1225e830/multus-admission-controller/0.log"
Feb 23 13:13:01.597845 master-0 kubenswrapper[7784]: I0223 13:13:01.597525 7784 generic.go:334] "Generic (PLEG): container finished" podID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerID="e605f56ea553ea41b317a4b82ddae8751a5476ad313ba687cfe0354516e82158" exitCode=137
Feb 23 13:13:01.597845 master-0 kubenswrapper[7784]: I0223 13:13:01.597673 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" event={"ID":"1b0122c7-1407-4a35-afcc-2c6b1225e830","Type":"ContainerDied","Data":"e605f56ea553ea41b317a4b82ddae8751a5476ad313ba687cfe0354516e82158"}
Feb 23 13:13:01.701373 master-0 kubenswrapper[7784]: I0223 13:13:01.701261 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:01.701373 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:01.701373 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:01.701373 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:01.701745 master-0 kubenswrapper[7784]: I0223 13:13:01.701377 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:01.741451 master-0 kubenswrapper[7784]: I0223 13:13:01.741397 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-rz2zl_1b0122c7-1407-4a35-afcc-2c6b1225e830/multus-admission-controller/0.log"
Feb 23 13:13:01.741747 master-0 kubenswrapper[7784]: I0223 13:13:01.741483 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:13:01.758498 master-0 kubenswrapper[7784]: I0223 13:13:01.758369 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw97s\" (UniqueName: \"kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s\") pod \"1b0122c7-1407-4a35-afcc-2c6b1225e830\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") "
Feb 23 13:13:01.758760 master-0 kubenswrapper[7784]: I0223 13:13:01.758700 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") pod \"1b0122c7-1407-4a35-afcc-2c6b1225e830\" (UID: \"1b0122c7-1407-4a35-afcc-2c6b1225e830\") "
Feb 23 13:13:01.763433 master-0 kubenswrapper[7784]: I0223 13:13:01.763357 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "1b0122c7-1407-4a35-afcc-2c6b1225e830" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:13:01.784825 master-0 kubenswrapper[7784]: I0223 13:13:01.779803 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s" (OuterVolumeSpecName: "kube-api-access-cw97s") pod "1b0122c7-1407-4a35-afcc-2c6b1225e830" (UID: "1b0122c7-1407-4a35-afcc-2c6b1225e830"). InnerVolumeSpecName "kube-api-access-cw97s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:13:01.861181 master-0 kubenswrapper[7784]: I0223 13:13:01.861107 7784 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b0122c7-1407-4a35-afcc-2c6b1225e830-webhook-certs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:13:01.861181 master-0 kubenswrapper[7784]: I0223 13:13:01.861158 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw97s\" (UniqueName: \"kubernetes.io/projected/1b0122c7-1407-4a35-afcc-2c6b1225e830-kube-api-access-cw97s\") on node \"master-0\" DevicePath \"\""
Feb 23 13:13:02.610812 master-0 kubenswrapper[7784]: I0223 13:13:02.610730 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-rz2zl_1b0122c7-1407-4a35-afcc-2c6b1225e830/multus-admission-controller/0.log"
Feb 23 13:13:02.611588 master-0 kubenswrapper[7784]: I0223 13:13:02.610842 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl" event={"ID":"1b0122c7-1407-4a35-afcc-2c6b1225e830","Type":"ContainerDied","Data":"f7761301fa084a7c8ad92e580706956001fc2e87ea644c3846fc5f707957b8a8"}
Feb 23 13:13:02.611588 master-0 kubenswrapper[7784]: I0223 13:13:02.610901 7784 scope.go:117] "RemoveContainer" containerID="1a566738f1d6ae1fbd631fc2a674cd4e994eb7c4949566fcd48562d5dff33cf2"
Feb 23 13:13:02.612906 master-0 kubenswrapper[7784]: I0223 13:13:02.612836 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"
Feb 23 13:13:02.639331 master-0 kubenswrapper[7784]: I0223 13:13:02.639266 7784 scope.go:117] "RemoveContainer" containerID="e605f56ea553ea41b317a4b82ddae8751a5476ad313ba687cfe0354516e82158"
Feb 23 13:13:02.672714 master-0 kubenswrapper[7784]: I0223 13:13:02.672578 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"]
Feb 23 13:13:02.677713 master-0 kubenswrapper[7784]: I0223 13:13:02.677628 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-rz2zl"]
Feb 23 13:13:02.701993 master-0 kubenswrapper[7784]: I0223 13:13:02.701874 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:02.701993 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:02.701993 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:02.701993 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:02.702161 master-0 kubenswrapper[7784]: I0223 13:13:02.702038 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:03.527743 master-0 kubenswrapper[7784]: I0223 13:13:03.527612 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" path="/var/lib/kubelet/pods/1b0122c7-1407-4a35-afcc-2c6b1225e830/volumes"
Feb 23 13:13:03.700819 master-0 kubenswrapper[7784]: I0223 13:13:03.700686 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:03.700819 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:03.700819 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:03.700819 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:03.700819 master-0 kubenswrapper[7784]: I0223 13:13:03.700817 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:04.700977 master-0 kubenswrapper[7784]: I0223 13:13:04.700880 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:04.700977 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:04.700977 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:04.700977 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:04.700977 master-0 kubenswrapper[7784]: I0223 13:13:04.700974 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:05.700507 master-0 kubenswrapper[7784]: I0223 13:13:05.700424 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:05.700507 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:05.700507 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:05.700507 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:05.700507 master-0 kubenswrapper[7784]: I0223 13:13:05.700504 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:06.701162 master-0 kubenswrapper[7784]: I0223 13:13:06.701073 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:06.701162 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:06.701162 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:06.701162 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:06.702030 master-0 kubenswrapper[7784]: I0223 13:13:06.701172 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:07.702014 master-0 kubenswrapper[7784]: I0223 13:13:07.701915 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:07.702014 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:07.702014 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:07.702014 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:07.702951 master-0 kubenswrapper[7784]: I0223 13:13:07.702024 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:08.701166 master-0 kubenswrapper[7784]: I0223 13:13:08.701055 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:08.701166 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:08.701166 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:08.701166 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:08.701715 master-0 kubenswrapper[7784]: I0223 13:13:08.701186 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:09.700930 master-0 kubenswrapper[7784]: I0223 13:13:09.700857 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:09.700930 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:09.700930 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:09.700930 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:09.700930 master-0 kubenswrapper[7784]: I0223 13:13:09.700930 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:10.699328 master-0 kubenswrapper[7784]: I0223 13:13:10.699258 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:10.699328 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:10.699328 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:10.699328 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:10.699328 master-0 kubenswrapper[7784]: I0223 13:13:10.699318 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:11.701094 master-0 kubenswrapper[7784]: I0223 13:13:11.701008 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:11.701094 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:11.701094 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:11.701094 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:11.701784 master-0 kubenswrapper[7784]: I0223 13:13:11.701125 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:12.700276 master-0 kubenswrapper[7784]: I0223 13:13:12.700188 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:12.700276 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:12.700276 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:12.700276 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:12.700630 master-0 kubenswrapper[7784]: I0223 13:13:12.700275 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:13.700238 master-0 kubenswrapper[7784]: I0223 13:13:13.700163 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:13.700238 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:13.700238 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:13.700238 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:13.701305 master-0 kubenswrapper[7784]: I0223 13:13:13.701255 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:14.610996 master-0 kubenswrapper[7784]: I0223 13:13:14.610944 7784 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:13:14.611303 master-0 kubenswrapper[7784]: I0223 13:13:14.611252 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager" containerID="cri-o://cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb" gracePeriod=30
Feb 23 13:13:14.611406 master-0 kubenswrapper[7784]: I0223 13:13:14.611284 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c" gracePeriod=30
Feb 23 13:13:14.611406 master-0 kubenswrapper[7784]: I0223 13:13:14.611361 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94" gracePeriod=30
Feb 23 13:13:14.611490 master-0 kubenswrapper[7784]: I0223 13:13:14.611382 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="0b2cc5255d43139b8419449b8700857e" containerName="cluster-policy-controller" containerID="cri-o://793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1" gracePeriod=30
Feb 23 13:13:14.615044 master-0 kubenswrapper[7784]: I0223 13:13:14.615008 7784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:13:14.616009 master-0 kubenswrapper[7784]: E0223 13:13:14.615988 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager"
Feb 23 13:13:14.616119 master-0 kubenswrapper[7784]: I0223 13:13:14.616104 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager"
Feb 23 13:13:14.616217 master-0 kubenswrapper[7784]: E0223 13:13:14.616199 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f51426-3809-4c1d-8628-0aca7873bde2" containerName="installer"
Feb 23 13:13:14.616379 master-0 kubenswrapper[7784]: I0223 13:13:14.616358 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f51426-3809-4c1d-8628-0aca7873bde2" containerName="installer"
Feb 23 13:13:14.616537 master-0 kubenswrapper[7784]: E0223 13:13:14.616518 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerName="kube-rbac-proxy"
Feb 23 13:13:14.616648 master-0 kubenswrapper[7784]: I0223 13:13:14.616631 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerName="kube-rbac-proxy"
Feb 23 13:13:14.616764 master-0 kubenswrapper[7784]: E0223 13:13:14.616745 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager-cert-syncer"
Feb 23 13:13:14.616887 master-0 kubenswrapper[7784]: I0223 13:13:14.616864 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager-cert-syncer"
Feb 23 13:13:14.617017 master-0 kubenswrapper[7784]: E0223 13:13:14.616998 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager-recovery-controller"
Feb 23 13:13:14.617128 master-0 kubenswrapper[7784]: I0223 13:13:14.617110 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager-recovery-controller"
Feb 23 13:13:14.617242 master-0 kubenswrapper[7784]: E0223 13:13:14.617223 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" containerName="kube-multus-additional-cni-plugins"
Feb 23 13:13:14.617440 master-0 kubenswrapper[7784]: I0223 13:13:14.617419 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" containerName="kube-multus-additional-cni-plugins"
Feb 23 13:13:14.617574 master-0 kubenswrapper[7784]: E0223 13:13:14.617554 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2cc5255d43139b8419449b8700857e" containerName="cluster-policy-controller"
Feb 23 13:13:14.617683 master-0 kubenswrapper[7784]: I0223 13:13:14.617665 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2cc5255d43139b8419449b8700857e" containerName="cluster-policy-controller"
Feb 23 13:13:14.617806 master-0 kubenswrapper[7784]: E0223 13:13:14.617785 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerName="multus-admission-controller"
Feb 23 13:13:14.617920 master-0 kubenswrapper[7784]: I0223 13:13:14.617901 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerName="multus-admission-controller"
Feb 23 13:13:14.618243 master-0 kubenswrapper[7784]: I0223 13:13:14.618219 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager-cert-syncer"
Feb 23 13:13:14.618391 master-0 kubenswrapper[7784]: I0223 13:13:14.618370 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a75c85-7f3b-4a73-b4ff-2c63c658ba82" containerName="kube-multus-additional-cni-plugins"
Feb 23 13:13:14.618526 master-0 kubenswrapper[7784]: I0223 13:13:14.618507 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerName="multus-admission-controller"
Feb 23 13:13:14.618639 master-0 kubenswrapper[7784]: I0223 13:13:14.618621 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2cc5255d43139b8419449b8700857e" containerName="cluster-policy-controller"
Feb 23 13:13:14.618754 master-0 kubenswrapper[7784]: I0223 13:13:14.618735 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager"
Feb 23 13:13:14.618886 master-0 kubenswrapper[7784]: I0223 13:13:14.618865 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f51426-3809-4c1d-8628-0aca7873bde2" containerName="installer"
Feb 23 13:13:14.619002 master-0 kubenswrapper[7784]: I0223 13:13:14.618982 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2cc5255d43139b8419449b8700857e" containerName="kube-controller-manager-recovery-controller"
Feb 23 13:13:14.619122 master-0 kubenswrapper[7784]: I0223 13:13:14.619103 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0122c7-1407-4a35-afcc-2c6b1225e830" containerName="kube-rbac-proxy"
Feb 23 13:13:14.649163 master-0 kubenswrapper[7784]: I0223 13:13:14.649095 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:14.649424 master-0 kubenswrapper[7784]: I0223 13:13:14.649187 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:14.700030 master-0 kubenswrapper[7784]: I0223 13:13:14.699693 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:14.700030 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:14.700030 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:14.700030 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:14.700030 master-0 kubenswrapper[7784]: I0223 13:13:14.699781 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:14.750235 master-0 kubenswrapper[7784]: I0223 13:13:14.750179 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:14.750683 master-0 kubenswrapper[7784]: I0223 13:13:14.750330 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:14.750683 master-0 kubenswrapper[7784]: I0223 13:13:14.750657 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:14.750853 master-0 kubenswrapper[7784]: I0223 13:13:14.750835 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:14.828681 master-0 kubenswrapper[7784]: I0223 13:13:14.828635 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_0b2cc5255d43139b8419449b8700857e/kube-controller-manager-cert-syncer/0.log"
Feb 23 13:13:14.829440 master-0 kubenswrapper[7784]: I0223 13:13:14.829414 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:14.839229 master-0 kubenswrapper[7784]: I0223 13:13:14.839127 7784 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="0b2cc5255d43139b8419449b8700857e" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0"
Feb 23 13:13:14.952737 master-0 kubenswrapper[7784]: I0223 13:13:14.952674 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-cert-dir\") pod \"0b2cc5255d43139b8419449b8700857e\" (UID: \"0b2cc5255d43139b8419449b8700857e\") "
Feb 23 13:13:14.952737 master-0 kubenswrapper[7784]: I0223 13:13:14.952743 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-resource-dir\") pod \"0b2cc5255d43139b8419449b8700857e\" (UID: \"0b2cc5255d43139b8419449b8700857e\") "
Feb 23 13:13:14.952967 master-0 kubenswrapper[7784]: I0223 13:13:14.952853 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "0b2cc5255d43139b8419449b8700857e" (UID: "0b2cc5255d43139b8419449b8700857e"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:13:14.953036 master-0 kubenswrapper[7784]: I0223 13:13:14.952970 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "0b2cc5255d43139b8419449b8700857e" (UID: "0b2cc5255d43139b8419449b8700857e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:13:14.953265 master-0 kubenswrapper[7784]: I0223 13:13:14.953233 7784 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:13:14.953265 master-0 kubenswrapper[7784]: I0223 13:13:14.953262 7784 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b2cc5255d43139b8419449b8700857e-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:13:15.528757 master-0 kubenswrapper[7784]: I0223 13:13:15.528670 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2cc5255d43139b8419449b8700857e" path="/var/lib/kubelet/pods/0b2cc5255d43139b8419449b8700857e/volumes"
Feb 23 13:13:15.704232 master-0 kubenswrapper[7784]: I0223 13:13:15.704156 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:15.704232 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:15.704232 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:15.704232 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:15.704232 master-0 kubenswrapper[7784]: I0223 13:13:15.704215 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:15.709535 master-0 kubenswrapper[7784]: I0223 13:13:15.709449 7784 generic.go:334] "Generic (PLEG): container finished" podID="6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" containerID="0430bea27e87ced3ed24fee214199f1e9afd86d96157c7f7feb638bd03a355f0" exitCode=0
Feb 23 13:13:15.709717 master-0 kubenswrapper[7784]: I0223 13:13:15.709653 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991","Type":"ContainerDied","Data":"0430bea27e87ced3ed24fee214199f1e9afd86d96157c7f7feb638bd03a355f0"}
Feb 23 13:13:15.712544 master-0 kubenswrapper[7784]: I0223 13:13:15.712504 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_0b2cc5255d43139b8419449b8700857e/kube-controller-manager-cert-syncer/0.log"
Feb 23 13:13:15.713452 master-0 kubenswrapper[7784]: I0223 13:13:15.713399 7784 generic.go:334] "Generic (PLEG): container finished" podID="0b2cc5255d43139b8419449b8700857e" containerID="f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94" exitCode=0
Feb 23 13:13:15.713452 master-0 kubenswrapper[7784]: I0223 13:13:15.713442 7784 generic.go:334] "Generic (PLEG): container finished" podID="0b2cc5255d43139b8419449b8700857e" containerID="547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c" exitCode=2
Feb 23 13:13:15.713452 master-0 kubenswrapper[7784]: I0223 13:13:15.713451 7784 generic.go:334] "Generic (PLEG): container finished" podID="0b2cc5255d43139b8419449b8700857e" containerID="793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1" exitCode=0
Feb 23 13:13:15.713452 master-0 kubenswrapper[7784]: I0223 13:13:15.713459 7784 generic.go:334] "Generic (PLEG): container finished" podID="0b2cc5255d43139b8419449b8700857e" containerID="cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb" exitCode=0
Feb 23 13:13:15.713771 master-0 kubenswrapper[7784]: I0223 13:13:15.713515 7784 scope.go:117] "RemoveContainer" containerID="f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"
Feb 23 13:13:15.713771 master-0 kubenswrapper[7784]: I0223 13:13:15.713609 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:15.732613 master-0 kubenswrapper[7784]: I0223 13:13:15.732524 7784 scope.go:117] "RemoveContainer" containerID="547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"
Feb 23 13:13:15.735786 master-0 kubenswrapper[7784]: I0223 13:13:15.735733 7784 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="0b2cc5255d43139b8419449b8700857e" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0"
Feb 23 13:13:15.758959 master-0 kubenswrapper[7784]: I0223 13:13:15.758892 7784 scope.go:117] "RemoveContainer" containerID="793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"
Feb 23 13:13:15.780302 master-0 kubenswrapper[7784]: I0223 13:13:15.780241 7784 scope.go:117] "RemoveContainer" containerID="cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"
Feb 23 13:13:15.806639 master-0 kubenswrapper[7784]: I0223 13:13:15.804432 7784 scope.go:117] "RemoveContainer" containerID="f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"
Feb 23 13:13:15.806639 master-0 kubenswrapper[7784]: E0223 13:13:15.805208 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": container with ID starting with f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94 not found: ID does not exist" containerID="f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"
Feb 23 13:13:15.806639 master-0 kubenswrapper[7784]: I0223 13:13:15.805263 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"} err="failed to get container status \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": rpc error: code = NotFound desc = could not find container \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": container with ID starting with f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94 not found: ID does not exist"
Feb 23 13:13:15.806639 master-0 kubenswrapper[7784]: I0223 13:13:15.805292 7784 scope.go:117] "RemoveContainer" containerID="547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"
Feb 23 13:13:15.806639 master-0 kubenswrapper[7784]: E0223 13:13:15.806199 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": container with ID starting with 547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c not found: ID does not exist" containerID="547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"
Feb 23 13:13:15.806639 master-0 kubenswrapper[7784]: I0223 13:13:15.806227 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"} err="failed to get container status \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": rpc error: code = NotFound desc = could not find container \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": container with ID starting with 547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c not found: ID does not exist"
Feb 23 13:13:15.806639 master-0 kubenswrapper[7784]: I0223 13:13:15.806246 7784 scope.go:117] "RemoveContainer" containerID="793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"
Feb 23 13:13:15.807180 master-0 kubenswrapper[7784]: E0223 13:13:15.807146 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": container with ID starting with 793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1 not found: ID does not exist" containerID="793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"
Feb 23 13:13:15.807180 master-0 kubenswrapper[7784]: I0223 13:13:15.807171 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"} err="failed to get container status \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": rpc error: code = NotFound desc = could not find container \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": container with ID starting with 793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1 not found: ID does not exist"
Feb 23 13:13:15.807328 master-0 kubenswrapper[7784]: I0223 13:13:15.807194 7784 scope.go:117] "RemoveContainer" containerID="cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"
Feb 23 13:13:15.808098 master-0 kubenswrapper[7784]: E0223 13:13:15.808020 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": container with ID starting with cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb not found: ID does not exist" containerID="cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"
Feb 23 13:13:15.808178 master-0 kubenswrapper[7784]: I0223 13:13:15.808089 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"} err="failed to get container status \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": rpc error: code = NotFound desc = could not find container \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": container with ID starting with cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb not found: ID does not exist"
Feb 23 13:13:15.808178 master-0 kubenswrapper[7784]: I0223 13:13:15.808126 7784 scope.go:117] "RemoveContainer" containerID="f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"
Feb 23 13:13:15.808906 master-0 kubenswrapper[7784]: I0223 13:13:15.808823 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"} err="failed to get container status \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": rpc error: code = NotFound desc = could not find container \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": container with ID starting with f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94 not found: ID does not exist"
Feb 23 13:13:15.808906 master-0 kubenswrapper[7784]: I0223 13:13:15.808900 7784 scope.go:117] "RemoveContainer" containerID="547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"
Feb 23 13:13:15.809553 master-0 kubenswrapper[7784]: I0223 13:13:15.809503 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"} err="failed to get container status \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": rpc error: code = NotFound desc = could not find container \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": container with ID starting with 547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c not found: ID does not exist"
Feb 23 13:13:15.809553 master-0 kubenswrapper[7784]: I0223 13:13:15.809535 7784 scope.go:117] "RemoveContainer" containerID="793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"
Feb 23 13:13:15.811176 master-0 kubenswrapper[7784]: I0223 13:13:15.811111 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"} err="failed to get container status \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": rpc error: code = NotFound desc = could not find container \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": container with ID starting with 793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1 not found: ID does not exist"
Feb 23 13:13:15.811276 master-0 kubenswrapper[7784]: I0223 13:13:15.811209 7784 scope.go:117] "RemoveContainer" containerID="cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"
Feb 23 13:13:15.812137 master-0 kubenswrapper[7784]: I0223 13:13:15.812075 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"} err="failed to get container status \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": rpc error: code = NotFound desc = could not find container \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": container with ID starting with cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb not found: ID does not exist"
Feb 23 13:13:15.812137 master-0 kubenswrapper[7784]: I0223 13:13:15.812122 7784 scope.go:117] "RemoveContainer" containerID="f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"
Feb 23 13:13:15.812598 master-0 kubenswrapper[7784]: I0223 13:13:15.812537 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"} err="failed to get container status \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": rpc error: code = NotFound desc = could not find container \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": container with ID starting with f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94 not found: ID does not exist"
Feb 23 13:13:15.812598 master-0 kubenswrapper[7784]: I0223 13:13:15.812583 7784 scope.go:117] "RemoveContainer" containerID="547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"
Feb 23 13:13:15.812927 master-0 kubenswrapper[7784]: I0223 13:13:15.812869 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"} err="failed to get container status \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": rpc error: code = NotFound desc = could not find container \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": container with ID starting with 547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c not found: ID does not exist"
Feb 23 13:13:15.812927 master-0 kubenswrapper[7784]: I0223 13:13:15.812910 7784 scope.go:117] "RemoveContainer" containerID="793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"
Feb 23 13:13:15.813718 master-0 kubenswrapper[7784]: I0223 13:13:15.813671 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"} err="failed to get container status \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": rpc error: code = NotFound desc = could not find container \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": container with ID starting with 793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1 not found: ID does not exist"
Feb 23 13:13:15.813718 master-0 kubenswrapper[7784]: I0223 13:13:15.813700 7784 scope.go:117] "RemoveContainer" containerID="cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"
Feb 23 13:13:15.814202 master-0 kubenswrapper[7784]: I0223 13:13:15.814149 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"} err="failed to get container status \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": rpc error: code = NotFound desc = could not find container \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": container with ID starting with cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb not found: ID does not exist"
Feb 23 13:13:15.814202 master-0 kubenswrapper[7784]: I0223 13:13:15.814183 7784 scope.go:117] "RemoveContainer" containerID="f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"
Feb 23 13:13:15.814589 master-0 kubenswrapper[7784]: I0223 13:13:15.814542 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94"} err="failed to get container status \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": rpc error: code = NotFound desc = could not find container \"f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94\": container with ID starting with f497bbc5638402e599f36cbecade12aa3352df01d47566ad10130e289967ab94 not found: ID does not exist"
Feb 23 13:13:15.814589 master-0 kubenswrapper[7784]: I0223 13:13:15.814570 7784 scope.go:117] "RemoveContainer" containerID="547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"
Feb 23 13:13:15.815055 master-0 kubenswrapper[7784]: I0223 13:13:15.814981 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c"} err="failed to get container status \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": rpc error: code = NotFound desc = could not find container \"547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c\": container with ID starting with 547688f226737e465c467421a8f08590d1c58838e255fece4968705c15e43a7c not found: ID does not exist"
Feb 23 13:13:15.815055 master-0 kubenswrapper[7784]: I0223 13:13:15.815044 7784 scope.go:117] "RemoveContainer" containerID="793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"
Feb 23 13:13:15.815558 master-0 kubenswrapper[7784]: I0223 13:13:15.815522 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1"} err="failed to get container status \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": rpc error: code = NotFound desc = could not find container \"793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1\": container with ID starting with 793330b18c5001ddb7fa092b3466224f7c2f2e6bd00a702063ab518d84c8d3c1 not found: ID does not exist"
Feb 23 13:13:15.815558 master-0 kubenswrapper[7784]: I0223 13:13:15.815550 7784 scope.go:117] "RemoveContainer" containerID="cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"
Feb 23 13:13:15.816897 master-0 kubenswrapper[7784]: I0223 13:13:15.816703 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb"} err="failed to get container status \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": rpc error: code = NotFound desc = could not find container \"cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb\": container with ID starting with cd1a622a7fb4e003854dfeaa5e9afd984753c0856fbe73535c48dc988ef27bfb not found: ID does not exist"
Feb 23 13:13:16.701372 master-0 kubenswrapper[7784]: I0223 13:13:16.701272 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:16.701372 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:16.701372 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:16.701372 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:16.701793 master-0 kubenswrapper[7784]: I0223 13:13:16.701381 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:17.055873 master-0 kubenswrapper[7784]: I0223 13:13:17.055787 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:13:17.090763 master-0 kubenswrapper[7784]: I0223 13:13:17.090689 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kube-api-access\") pod \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") "
Feb 23 13:13:17.090914 master-0 kubenswrapper[7784]: I0223 13:13:17.090845 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kubelet-dir\") pod \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") "
Feb 23 13:13:17.091041 master-0 kubenswrapper[7784]: I0223 13:13:17.090979 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" (UID: "6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:13:17.091041 master-0 kubenswrapper[7784]: I0223 13:13:17.091025 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-var-lock\") pod \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\" (UID: \"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991\") "
Feb 23 13:13:17.091149 master-0 kubenswrapper[7784]: I0223 13:13:17.091045 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-var-lock" (OuterVolumeSpecName: "var-lock") pod "6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" (UID: "6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:13:17.095707 master-0 kubenswrapper[7784]: I0223 13:13:17.095627 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:13:17.095792 master-0 kubenswrapper[7784]: I0223 13:13:17.095709 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:13:17.097122 master-0 kubenswrapper[7784]: I0223 13:13:17.097069 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" (UID: "6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:13:17.196767 master-0 kubenswrapper[7784]: I0223 13:13:17.196674 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:17.700921 master-0 kubenswrapper[7784]: I0223 13:13:17.700849 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:17.700921 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:17.700921 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:17.700921 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:17.701499 master-0 kubenswrapper[7784]: I0223 13:13:17.701457 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:17.731075 master-0 kubenswrapper[7784]: I0223 13:13:17.730991 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991","Type":"ContainerDied","Data":"1a59a96ae4654f36cd7044dc91477d79119c89e04c37a4bf1eb93ffbac15b813"} Feb 23 13:13:17.731075 master-0 kubenswrapper[7784]: I0223 13:13:17.731046 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a59a96ae4654f36cd7044dc91477d79119c89e04c37a4bf1eb93ffbac15b813" Feb 23 13:13:17.731666 master-0 kubenswrapper[7784]: I0223 13:13:17.731584 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:13:18.649475 master-0 kubenswrapper[7784]: I0223 13:13:18.648668 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 23 13:13:18.649475 master-0 kubenswrapper[7784]: E0223 13:13:18.648964 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" containerName="installer" Feb 23 13:13:18.649475 master-0 kubenswrapper[7784]: I0223 13:13:18.648981 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" containerName="installer" Feb 23 13:13:18.649475 master-0 kubenswrapper[7784]: I0223 13:13:18.649137 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" containerName="installer" Feb 23 13:13:18.650128 master-0 kubenswrapper[7784]: I0223 13:13:18.649638 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.652147 master-0 kubenswrapper[7784]: I0223 13:13:18.652086 7784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:13:18.652573 master-0 kubenswrapper[7784]: I0223 13:13:18.652514 7784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-hj5xm" Feb 23 13:13:18.668053 master-0 kubenswrapper[7784]: I0223 13:13:18.667988 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 23 13:13:18.700202 master-0 kubenswrapper[7784]: I0223 13:13:18.700139 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:18.700202 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:18.700202 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:18.700202 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:18.700474 master-0 kubenswrapper[7784]: I0223 13:13:18.700214 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:18.718292 master-0 kubenswrapper[7784]: I0223 13:13:18.718223 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 
13:13:18.718401 master-0 kubenswrapper[7784]: I0223 13:13:18.718305 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.718401 master-0 kubenswrapper[7784]: I0223 13:13:18.718372 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/045ca7a8-edf0-4476-a195-f100aaf403cc-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.820006 master-0 kubenswrapper[7784]: I0223 13:13:18.819884 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/045ca7a8-edf0-4476-a195-f100aaf403cc-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.820458 master-0 kubenswrapper[7784]: I0223 13:13:18.820048 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.820458 master-0 kubenswrapper[7784]: I0223 13:13:18.820079 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: 
\"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.820458 master-0 kubenswrapper[7784]: I0223 13:13:18.820187 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.820458 master-0 kubenswrapper[7784]: I0223 13:13:18.820221 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.839993 master-0 kubenswrapper[7784]: I0223 13:13:18.839871 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/045ca7a8-edf0-4476-a195-f100aaf403cc-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:18.971324 master-0 kubenswrapper[7784]: I0223 13:13:18.971102 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:19.427506 master-0 kubenswrapper[7784]: I0223 13:13:19.426149 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 23 13:13:19.701092 master-0 kubenswrapper[7784]: I0223 13:13:19.700868 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:19.701092 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:19.701092 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:19.701092 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:19.701092 master-0 kubenswrapper[7784]: I0223 13:13:19.700945 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:19.745805 master-0 kubenswrapper[7784]: I0223 13:13:19.745711 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"045ca7a8-edf0-4476-a195-f100aaf403cc","Type":"ContainerStarted","Data":"5e3bd25dbe6559d87310d6a6eb1aaa87a045034f74e111f053c8eaa27e2f380e"} Feb 23 13:13:20.701328 master-0 kubenswrapper[7784]: I0223 13:13:20.701201 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:20.701328 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:20.701328 master-0 kubenswrapper[7784]: [+]process-running ok 
Feb 23 13:13:20.701328 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:20.702269 master-0 kubenswrapper[7784]: I0223 13:13:20.701323 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:20.761702 master-0 kubenswrapper[7784]: I0223 13:13:20.761555 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"045ca7a8-edf0-4476-a195-f100aaf403cc","Type":"ContainerStarted","Data":"c2a3282a0425d31fa975719eb30b221dd8f485e7e9f360fd96b2586582e8a439"} Feb 23 13:13:20.798801 master-0 kubenswrapper[7784]: I0223 13:13:20.798610 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.798582668 podStartE2EDuration="2.798582668s" podCreationTimestamp="2026-02-23 13:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:13:20.789066225 +0000 UTC m=+743.523919918" watchObservedRunningTime="2026-02-23 13:13:20.798582668 +0000 UTC m=+743.533436341" Feb 23 13:13:21.701380 master-0 kubenswrapper[7784]: I0223 13:13:21.701267 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:21.701380 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:21.701380 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:21.701380 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:21.702383 master-0 kubenswrapper[7784]: I0223 13:13:21.701391 7784 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:22.701920 master-0 kubenswrapper[7784]: I0223 13:13:22.701784 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:22.701920 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:22.701920 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:22.701920 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:22.703014 master-0 kubenswrapper[7784]: I0223 13:13:22.701947 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:23.701484 master-0 kubenswrapper[7784]: I0223 13:13:23.701403 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:23.701484 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:23.701484 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:23.701484 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:23.701484 master-0 kubenswrapper[7784]: I0223 13:13:23.701490 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 23 13:13:24.700884 master-0 kubenswrapper[7784]: I0223 13:13:24.700797 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:24.700884 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:24.700884 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:24.700884 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:24.701305 master-0 kubenswrapper[7784]: I0223 13:13:24.700897 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:24.712587 master-0 kubenswrapper[7784]: I0223 13:13:24.712535 7784 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 23 13:13:24.713139 master-0 kubenswrapper[7784]: I0223 13:13:24.712788 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" containerID="cri-o://cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb" gracePeriod=30 Feb 23 13:13:24.714301 master-0 kubenswrapper[7784]: I0223 13:13:24.714258 7784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 23 13:13:24.714610 master-0 kubenswrapper[7784]: E0223 13:13:24.714568 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.714610 master-0 kubenswrapper[7784]: I0223 13:13:24.714594 7784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.714610 master-0 kubenswrapper[7784]: E0223 13:13:24.714613 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.714787 master-0 kubenswrapper[7784]: I0223 13:13:24.714620 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.714839 master-0 kubenswrapper[7784]: I0223 13:13:24.714800 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.714839 master-0 kubenswrapper[7784]: I0223 13:13:24.714816 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.714839 master-0 kubenswrapper[7784]: I0223 13:13:24.714824 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.714960 master-0 kubenswrapper[7784]: E0223 13:13:24.714932 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.714960 master-0 kubenswrapper[7784]: I0223 13:13:24.714941 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 23 13:13:24.716080 master-0 kubenswrapper[7784]: I0223 13:13:24.716047 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:24.752755 master-0 kubenswrapper[7784]: I0223 13:13:24.752701 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 23 13:13:24.835498 master-0 kubenswrapper[7784]: I0223 13:13:24.835448 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:24.837374 master-0 kubenswrapper[7784]: I0223 13:13:24.835888 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:24.897734 master-0 kubenswrapper[7784]: I0223 13:13:24.897686 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:13:24.936979 master-0 kubenswrapper[7784]: I0223 13:13:24.936901 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:24.937162 master-0 kubenswrapper[7784]: I0223 13:13:24.936991 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:24.937162 master-0 kubenswrapper[7784]: I0223 13:13:24.937038 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:24.937162 master-0 kubenswrapper[7784]: I0223 13:13:24.937106 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:24.941615 master-0 kubenswrapper[7784]: I0223 13:13:24.941565 7784 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="879cee34-6e15-42da-8a90-098398420239" Feb 23 13:13:25.038070 master-0 
kubenswrapper[7784]: I0223 13:13:25.038007 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " Feb 23 13:13:25.038070 master-0 kubenswrapper[7784]: I0223 13:13:25.038065 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " Feb 23 13:13:25.047640 master-0 kubenswrapper[7784]: I0223 13:13:25.042527 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets" (OuterVolumeSpecName: "secrets") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:25.047640 master-0 kubenswrapper[7784]: I0223 13:13:25.042564 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs" (OuterVolumeSpecName: "logs") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:25.051109 master-0 kubenswrapper[7784]: I0223 13:13:25.050754 7784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:25.051109 master-0 kubenswrapper[7784]: I0223 13:13:25.050931 7784 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:25.051109 master-0 kubenswrapper[7784]: I0223 13:13:25.050987 7784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:25.256550 master-0 kubenswrapper[7784]: I0223 13:13:25.250365 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 23 13:13:25.256550 master-0 kubenswrapper[7784]: I0223 13:13:25.250649 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="045ca7a8-edf0-4476-a195-f100aaf403cc" containerName="installer" containerID="cri-o://c2a3282a0425d31fa975719eb30b221dd8f485e7e9f360fd96b2586582e8a439" gracePeriod=30 Feb 23 13:13:25.524176 master-0 kubenswrapper[7784]: I0223 13:13:25.524119 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c3cb71c9851003c8de7e7c5db4b87e" path="/var/lib/kubelet/pods/56c3cb71c9851003c8de7e7c5db4b87e/volumes" Feb 23 13:13:25.524467 master-0 kubenswrapper[7784]: I0223 13:13:25.524446 7784 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 23 13:13:25.539496 master-0 kubenswrapper[7784]: I0223 13:13:25.539426 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 23 13:13:25.539496 master-0 kubenswrapper[7784]: I0223 13:13:25.539489 7784 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" 
mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="879cee34-6e15-42da-8a90-098398420239" Feb 23 13:13:25.542856 master-0 kubenswrapper[7784]: I0223 13:13:25.542773 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 23 13:13:25.542856 master-0 kubenswrapper[7784]: I0223 13:13:25.542820 7784 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="879cee34-6e15-42da-8a90-098398420239" Feb 23 13:13:25.701182 master-0 kubenswrapper[7784]: I0223 13:13:25.701075 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:25.701182 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:25.701182 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:25.701182 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:25.701578 master-0 kubenswrapper[7784]: I0223 13:13:25.701190 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:25.803070 master-0 kubenswrapper[7784]: I0223 13:13:25.802871 7784 generic.go:334] "Generic (PLEG): container finished" podID="abccfbee-41f4-4557-b953-eb6e719aee31" containerID="d45c58d10778fd4bb86b1fa48d56249170c3cf26b7e64edff21eff2bddff7690" exitCode=0 Feb 23 13:13:25.803070 master-0 kubenswrapper[7784]: I0223 13:13:25.803048 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" 
event={"ID":"abccfbee-41f4-4557-b953-eb6e719aee31","Type":"ContainerDied","Data":"d45c58d10778fd4bb86b1fa48d56249170c3cf26b7e64edff21eff2bddff7690"} Feb 23 13:13:25.806525 master-0 kubenswrapper[7784]: I0223 13:13:25.806452 7784 generic.go:334] "Generic (PLEG): container finished" podID="d03a1e6620a92c780b0a91c72a55bc8b" containerID="d17de98c558298cd0c0ce6c4975f377e4c15754cbdbf335c523539dbef081684" exitCode=0 Feb 23 13:13:25.806669 master-0 kubenswrapper[7784]: I0223 13:13:25.806561 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerDied","Data":"d17de98c558298cd0c0ce6c4975f377e4c15754cbdbf335c523539dbef081684"} Feb 23 13:13:25.806669 master-0 kubenswrapper[7784]: I0223 13:13:25.806599 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"968a9822c87f73e9559c28309a177baff6729af2cf700098ba1888ec0387b7bc"} Feb 23 13:13:25.812235 master-0 kubenswrapper[7784]: I0223 13:13:25.812123 7784 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb" exitCode=0 Feb 23 13:13:25.812433 master-0 kubenswrapper[7784]: I0223 13:13:25.812247 7784 scope.go:117] "RemoveContainer" containerID="cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb" Feb 23 13:13:25.812433 master-0 kubenswrapper[7784]: I0223 13:13:25.812311 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:13:25.864491 master-0 kubenswrapper[7784]: I0223 13:13:25.863575 7784 scope.go:117] "RemoveContainer" containerID="1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357" Feb 23 13:13:25.924489 master-0 kubenswrapper[7784]: I0223 13:13:25.924450 7784 scope.go:117] "RemoveContainer" containerID="cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb" Feb 23 13:13:25.925242 master-0 kubenswrapper[7784]: E0223 13:13:25.925199 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb\": container with ID starting with cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb not found: ID does not exist" containerID="cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb" Feb 23 13:13:25.925296 master-0 kubenswrapper[7784]: I0223 13:13:25.925256 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb"} err="failed to get container status \"cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb\": rpc error: code = NotFound desc = could not find container \"cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb\": container with ID starting with cf286ec8fc21b60d1dc7a9bc8c70e8e17dbc351e78bcc6878f5a4f2a0eb002fb not found: ID does not exist" Feb 23 13:13:25.925296 master-0 kubenswrapper[7784]: I0223 13:13:25.925289 7784 scope.go:117] "RemoveContainer" containerID="1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357" Feb 23 13:13:25.926966 master-0 kubenswrapper[7784]: E0223 13:13:25.926918 7784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357\": container with ID starting with 1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357 not found: ID does not exist" containerID="1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357" Feb 23 13:13:25.927027 master-0 kubenswrapper[7784]: I0223 13:13:25.926953 7784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357"} err="failed to get container status \"1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357\": rpc error: code = NotFound desc = could not find container \"1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357\": container with ID starting with 1755ccf8a0898f9b65a8d70fe51f67ebf4e0228a7913bef4449be32f1bc85357 not found: ID does not exist" Feb 23 13:13:26.700065 master-0 kubenswrapper[7784]: I0223 13:13:26.700008 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:26.700065 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:26.700065 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:26.700065 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:26.700330 master-0 kubenswrapper[7784]: I0223 13:13:26.700096 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:26.829878 master-0 kubenswrapper[7784]: I0223 13:13:26.829798 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"c95d7390704ea251dc7769d1855655b41d14a57055810b87414232733e52ca76"} Feb 23 13:13:26.829878 master-0 kubenswrapper[7784]: I0223 13:13:26.829882 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:13:26.830386 master-0 kubenswrapper[7784]: I0223 13:13:26.829900 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"91687f767ec0a818591899bcd9752bd7650e8ae309c3b19533204110ae03e018"} Feb 23 13:13:26.830386 master-0 kubenswrapper[7784]: I0223 13:13:26.829914 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"c7094c932ff3ee165f80299c697532f08bd592736188ecad774f00acf21ea126"} Feb 23 13:13:26.853546 master-0 kubenswrapper[7784]: I0223 13:13:26.853152 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.853124942 podStartE2EDuration="2.853124942s" podCreationTimestamp="2026-02-23 13:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:13:26.850441286 +0000 UTC m=+749.585294949" watchObservedRunningTime="2026-02-23 13:13:26.853124942 +0000 UTC m=+749.587978585" Feb 23 13:13:27.230086 master-0 kubenswrapper[7784]: I0223 13:13:27.230007 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:13:27.384328 master-0 kubenswrapper[7784]: I0223 13:13:27.384276 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-kubelet-dir\") pod \"abccfbee-41f4-4557-b953-eb6e719aee31\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " Feb 23 13:13:27.384725 master-0 kubenswrapper[7784]: I0223 13:13:27.384417 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "abccfbee-41f4-4557-b953-eb6e719aee31" (UID: "abccfbee-41f4-4557-b953-eb6e719aee31"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:27.384801 master-0 kubenswrapper[7784]: I0223 13:13:27.384685 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-var-lock\") pod \"abccfbee-41f4-4557-b953-eb6e719aee31\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " Feb 23 13:13:27.384886 master-0 kubenswrapper[7784]: I0223 13:13:27.384857 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abccfbee-41f4-4557-b953-eb6e719aee31-kube-api-access\") pod \"abccfbee-41f4-4557-b953-eb6e719aee31\" (UID: \"abccfbee-41f4-4557-b953-eb6e719aee31\") " Feb 23 13:13:27.384990 master-0 kubenswrapper[7784]: I0223 13:13:27.384960 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-var-lock" (OuterVolumeSpecName: "var-lock") pod "abccfbee-41f4-4557-b953-eb6e719aee31" (UID: "abccfbee-41f4-4557-b953-eb6e719aee31"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:27.385421 master-0 kubenswrapper[7784]: I0223 13:13:27.385389 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:27.385421 master-0 kubenswrapper[7784]: I0223 13:13:27.385412 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/abccfbee-41f4-4557-b953-eb6e719aee31-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:27.388558 master-0 kubenswrapper[7784]: I0223 13:13:27.388284 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abccfbee-41f4-4557-b953-eb6e719aee31-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "abccfbee-41f4-4557-b953-eb6e719aee31" (UID: "abccfbee-41f4-4557-b953-eb6e719aee31"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:13:27.487081 master-0 kubenswrapper[7784]: I0223 13:13:27.486915 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abccfbee-41f4-4557-b953-eb6e719aee31-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:27.701869 master-0 kubenswrapper[7784]: I0223 13:13:27.701784 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:27.701869 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:27.701869 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:27.701869 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:27.702210 master-0 kubenswrapper[7784]: I0223 13:13:27.701895 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:27.839318 master-0 kubenswrapper[7784]: I0223 13:13:27.839230 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:13:27.839318 master-0 kubenswrapper[7784]: I0223 13:13:27.839251 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"abccfbee-41f4-4557-b953-eb6e719aee31","Type":"ContainerDied","Data":"5ce34e8f56df01165d847f6464e77360e3f0978547ad68a6025ff1d62dabfaac"} Feb 23 13:13:27.840001 master-0 kubenswrapper[7784]: I0223 13:13:27.839331 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce34e8f56df01165d847f6464e77360e3f0978547ad68a6025ff1d62dabfaac" Feb 23 13:13:28.515181 master-0 kubenswrapper[7784]: I0223 13:13:28.515077 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:28.542332 master-0 kubenswrapper[7784]: I0223 13:13:28.542257 7784 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a5b53ced-8da1-40ed-aa51-af83fce8da58" Feb 23 13:13:28.542332 master-0 kubenswrapper[7784]: I0223 13:13:28.542316 7784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a5b53ced-8da1-40ed-aa51-af83fce8da58" Feb 23 13:13:28.560989 master-0 kubenswrapper[7784]: I0223 13:13:28.560191 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:13:28.561329 master-0 kubenswrapper[7784]: I0223 13:13:28.561282 7784 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:28.566951 master-0 kubenswrapper[7784]: I0223 13:13:28.566857 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:13:28.581719 master-0 
kubenswrapper[7784]: I0223 13:13:28.581663 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:28.587529 master-0 kubenswrapper[7784]: I0223 13:13:28.587450 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:13:28.633569 master-0 kubenswrapper[7784]: W0223 13:13:28.633479 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3e061f9d09dab5dbaef15b3f1e67a0.slice/crio-5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584 WatchSource:0}: Error finding container 5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584: Status 404 returned error can't find the container with id 5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584 Feb 23 13:13:28.701596 master-0 kubenswrapper[7784]: I0223 13:13:28.701503 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:28.701596 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:28.701596 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:28.701596 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:28.702105 master-0 kubenswrapper[7784]: I0223 13:13:28.701590 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:28.859606 master-0 kubenswrapper[7784]: I0223 13:13:28.859539 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584"} Feb 23 13:13:29.704427 master-0 kubenswrapper[7784]: I0223 13:13:29.701360 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:29.704427 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:29.704427 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:29.704427 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:29.704427 master-0 kubenswrapper[7784]: I0223 13:13:29.701420 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:29.849673 master-0 kubenswrapper[7784]: I0223 13:13:29.849611 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:13:29.849950 master-0 kubenswrapper[7784]: E0223 13:13:29.849921 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abccfbee-41f4-4557-b953-eb6e719aee31" containerName="installer" Feb 23 13:13:29.849997 master-0 kubenswrapper[7784]: I0223 13:13:29.849951 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="abccfbee-41f4-4557-b953-eb6e719aee31" containerName="installer" Feb 23 13:13:29.850155 master-0 kubenswrapper[7784]: I0223 13:13:29.850134 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="abccfbee-41f4-4557-b953-eb6e719aee31" containerName="installer" Feb 23 13:13:29.850718 master-0 kubenswrapper[7784]: I0223 13:13:29.850693 7784 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:29.868905 master-0 kubenswrapper[7784]: I0223 13:13:29.868861 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"225b72ffe810de606c91db96bda704162eba140695b0d114f42ad9b5f7338027"} Feb 23 13:13:29.869277 master-0 kubenswrapper[7784]: I0223 13:13:29.868916 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"318802d3ffb8951642b7de6e2fcdce57f2f19df5bc9dbb49de74dac1fb692661"} Feb 23 13:13:29.869277 master-0 kubenswrapper[7784]: I0223 13:13:29.868930 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"6d1d2a690e1d1c47fa4cec1c840fe9083bc8bf1097a1a9a0b84ede40886e22da"} Feb 23 13:13:29.869277 master-0 kubenswrapper[7784]: I0223 13:13:29.868942 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b"} Feb 23 13:13:29.906274 master-0 kubenswrapper[7784]: I0223 13:13:29.906084 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:13:29.933204 master-0 kubenswrapper[7784]: I0223 13:13:29.933120 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kube-api-access\") 
pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:29.933427 master-0 kubenswrapper[7784]: I0223 13:13:29.933225 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-var-lock\") pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:29.933427 master-0 kubenswrapper[7784]: I0223 13:13:29.933335 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:30.025029 master-0 kubenswrapper[7784]: I0223 13:13:30.024919 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.02489596 podStartE2EDuration="2.02489596s" podCreationTimestamp="2026-02-23 13:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:13:30.021427786 +0000 UTC m=+752.756281459" watchObservedRunningTime="2026-02-23 13:13:30.02489596 +0000 UTC m=+752.759749603" Feb 23 13:13:30.035271 master-0 kubenswrapper[7784]: I0223 13:13:30.035205 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kube-api-access\") pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:30.035458 master-0 
kubenswrapper[7784]: I0223 13:13:30.035312 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-var-lock\") pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:30.035458 master-0 kubenswrapper[7784]: I0223 13:13:30.035430 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:30.035566 master-0 kubenswrapper[7784]: I0223 13:13:30.035531 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:30.035902 master-0 kubenswrapper[7784]: I0223 13:13:30.035868 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-var-lock\") pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:30.067413 master-0 kubenswrapper[7784]: I0223 13:13:30.066998 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kube-api-access\") pod \"installer-2-master-0\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:30.167489 master-0 kubenswrapper[7784]: I0223 13:13:30.167350 7784 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:30.549842 master-0 kubenswrapper[7784]: I0223 13:13:30.549807 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:13:30.700052 master-0 kubenswrapper[7784]: I0223 13:13:30.699981 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:30.700052 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:30.700052 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:30.700052 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:30.700052 master-0 kubenswrapper[7784]: I0223 13:13:30.700045 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:30.875377 master-0 kubenswrapper[7784]: I0223 13:13:30.875303 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"be03a0b7-337b-4809-9dab-8e2314cf6bfa","Type":"ContainerStarted","Data":"622495d94388d10205167a6a39534dd408a2ab414c959f4699b04e192501d432"} Feb 23 13:13:30.875907 master-0 kubenswrapper[7784]: I0223 13:13:30.875388 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"be03a0b7-337b-4809-9dab-8e2314cf6bfa","Type":"ContainerStarted","Data":"348deaaa69f0f97e2e194ec5202721da8c8c456b6143087c58a99942a091251a"} Feb 23 13:13:30.895498 master-0 kubenswrapper[7784]: I0223 13:13:30.893489 7784 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=1.893466645 podStartE2EDuration="1.893466645s" podCreationTimestamp="2026-02-23 13:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:13:30.889405645 +0000 UTC m=+753.624259288" watchObservedRunningTime="2026-02-23 13:13:30.893466645 +0000 UTC m=+753.628320298" Feb 23 13:13:31.700173 master-0 kubenswrapper[7784]: I0223 13:13:31.700110 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:31.700173 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:31.700173 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:31.700173 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:31.700463 master-0 kubenswrapper[7784]: I0223 13:13:31.700208 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:32.700576 master-0 kubenswrapper[7784]: I0223 13:13:32.700490 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:32.700576 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:32.700576 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:32.700576 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:32.701156 master-0 kubenswrapper[7784]: I0223 13:13:32.700605 7784 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:33.701590 master-0 kubenswrapper[7784]: I0223 13:13:33.701522 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:33.701590 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:33.701590 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:33.701590 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:33.702152 master-0 kubenswrapper[7784]: I0223 13:13:33.701613 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:34.699667 master-0 kubenswrapper[7784]: I0223 13:13:34.699604 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:34.699667 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:34.699667 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:34.699667 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:34.699667 master-0 kubenswrapper[7784]: I0223 13:13:34.699663 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 23 13:13:35.700874 master-0 kubenswrapper[7784]: I0223 13:13:35.700782 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:35.700874 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:35.700874 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:35.700874 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:35.700874 master-0 kubenswrapper[7784]: I0223 13:13:35.700851 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:36.700029 master-0 kubenswrapper[7784]: I0223 13:13:36.699950 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:36.700029 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:36.700029 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:36.700029 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:36.700286 master-0 kubenswrapper[7784]: I0223 13:13:36.700029 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:37.702501 master-0 kubenswrapper[7784]: I0223 13:13:37.702409 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:37.702501 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:37.702501 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:37.702501 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:37.703235 master-0 kubenswrapper[7784]: I0223 13:13:37.702547 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:38.582157 master-0 kubenswrapper[7784]: I0223 13:13:38.582058 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:38.582157 master-0 kubenswrapper[7784]: I0223 13:13:38.582161 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:38.582157 master-0 kubenswrapper[7784]: I0223 13:13:38.582177 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:38.582625 master-0 kubenswrapper[7784]: I0223 13:13:38.582190 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:38.582756 master-0 kubenswrapper[7784]: I0223 13:13:38.582679 7784 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 23 
13:13:38.582829 master-0 kubenswrapper[7784]: I0223 13:13:38.582790 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 23 13:13:38.586602 master-0 kubenswrapper[7784]: I0223 13:13:38.586566 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:38.701066 master-0 kubenswrapper[7784]: I0223 13:13:38.700977 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:38.701066 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:38.701066 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:38.701066 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:38.701407 master-0 kubenswrapper[7784]: I0223 13:13:38.701076 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:39.701713 master-0 kubenswrapper[7784]: I0223 13:13:39.701582 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:39.701713 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:39.701713 master-0 kubenswrapper[7784]: 
[+]process-running ok Feb 23 13:13:39.701713 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:39.701713 master-0 kubenswrapper[7784]: I0223 13:13:39.701704 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:40.701873 master-0 kubenswrapper[7784]: I0223 13:13:40.701758 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:40.701873 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:40.701873 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:40.701873 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:40.702853 master-0 kubenswrapper[7784]: I0223 13:13:40.701886 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:41.651564 master-0 kubenswrapper[7784]: I0223 13:13:41.651461 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:13:41.651982 master-0 kubenswrapper[7784]: I0223 13:13:41.651897 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="be03a0b7-337b-4809-9dab-8e2314cf6bfa" containerName="installer" containerID="cri-o://622495d94388d10205167a6a39534dd408a2ab414c959f4699b04e192501d432" gracePeriod=30 Feb 23 13:13:41.703865 master-0 kubenswrapper[7784]: I0223 13:13:41.702045 7784 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:41.703865 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:41.703865 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:41.703865 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:41.703865 master-0 kubenswrapper[7784]: I0223 13:13:41.702181 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:41.968091 master-0 kubenswrapper[7784]: I0223 13:13:41.967947 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_be03a0b7-337b-4809-9dab-8e2314cf6bfa/installer/0.log" Feb 23 13:13:41.968091 master-0 kubenswrapper[7784]: I0223 13:13:41.968004 7784 generic.go:334] "Generic (PLEG): container finished" podID="be03a0b7-337b-4809-9dab-8e2314cf6bfa" containerID="622495d94388d10205167a6a39534dd408a2ab414c959f4699b04e192501d432" exitCode=1 Feb 23 13:13:41.968091 master-0 kubenswrapper[7784]: I0223 13:13:41.968035 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"be03a0b7-337b-4809-9dab-8e2314cf6bfa","Type":"ContainerDied","Data":"622495d94388d10205167a6a39534dd408a2ab414c959f4699b04e192501d432"} Feb 23 13:13:42.085075 master-0 kubenswrapper[7784]: I0223 13:13:42.084975 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_be03a0b7-337b-4809-9dab-8e2314cf6bfa/installer/0.log" Feb 23 13:13:42.085276 master-0 kubenswrapper[7784]: I0223 13:13:42.085125 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:42.219185 master-0 kubenswrapper[7784]: I0223 13:13:42.219038 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kubelet-dir\") pod \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " Feb 23 13:13:42.219574 master-0 kubenswrapper[7784]: I0223 13:13:42.219545 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kube-api-access\") pod \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " Feb 23 13:13:42.219719 master-0 kubenswrapper[7784]: I0223 13:13:42.219701 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-var-lock\") pod \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\" (UID: \"be03a0b7-337b-4809-9dab-8e2314cf6bfa\") " Feb 23 13:13:42.219928 master-0 kubenswrapper[7784]: I0223 13:13:42.219213 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "be03a0b7-337b-4809-9dab-8e2314cf6bfa" (UID: "be03a0b7-337b-4809-9dab-8e2314cf6bfa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:42.219974 master-0 kubenswrapper[7784]: I0223 13:13:42.219812 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-var-lock" (OuterVolumeSpecName: "var-lock") pod "be03a0b7-337b-4809-9dab-8e2314cf6bfa" (UID: "be03a0b7-337b-4809-9dab-8e2314cf6bfa"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:42.220334 master-0 kubenswrapper[7784]: I0223 13:13:42.220286 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:42.220400 master-0 kubenswrapper[7784]: I0223 13:13:42.220329 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be03a0b7-337b-4809-9dab-8e2314cf6bfa-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:42.222604 master-0 kubenswrapper[7784]: I0223 13:13:42.222534 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "be03a0b7-337b-4809-9dab-8e2314cf6bfa" (UID: "be03a0b7-337b-4809-9dab-8e2314cf6bfa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:13:42.321408 master-0 kubenswrapper[7784]: I0223 13:13:42.321321 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be03a0b7-337b-4809-9dab-8e2314cf6bfa-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:42.701094 master-0 kubenswrapper[7784]: I0223 13:13:42.700999 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:42.701094 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:42.701094 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:42.701094 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:42.701448 master-0 kubenswrapper[7784]: I0223 13:13:42.701097 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:42.980972 master-0 kubenswrapper[7784]: I0223 13:13:42.980794 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_be03a0b7-337b-4809-9dab-8e2314cf6bfa/installer/0.log" Feb 23 13:13:42.981539 master-0 kubenswrapper[7784]: I0223 13:13:42.981016 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:13:42.981539 master-0 kubenswrapper[7784]: I0223 13:13:42.980973 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"be03a0b7-337b-4809-9dab-8e2314cf6bfa","Type":"ContainerDied","Data":"348deaaa69f0f97e2e194ec5202721da8c8c456b6143087c58a99942a091251a"} Feb 23 13:13:42.981539 master-0 kubenswrapper[7784]: I0223 13:13:42.981154 7784 scope.go:117] "RemoveContainer" containerID="622495d94388d10205167a6a39534dd408a2ab414c959f4699b04e192501d432" Feb 23 13:13:43.026016 master-0 kubenswrapper[7784]: I0223 13:13:43.025925 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:13:43.044931 master-0 kubenswrapper[7784]: I0223 13:13:43.044808 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:13:43.524841 master-0 kubenswrapper[7784]: I0223 13:13:43.524792 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be03a0b7-337b-4809-9dab-8e2314cf6bfa" path="/var/lib/kubelet/pods/be03a0b7-337b-4809-9dab-8e2314cf6bfa/volumes" Feb 23 13:13:43.700617 master-0 kubenswrapper[7784]: I0223 13:13:43.700512 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:43.700617 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:43.700617 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:43.700617 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:43.701200 master-0 kubenswrapper[7784]: I0223 13:13:43.700624 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" 
podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:44.701361 master-0 kubenswrapper[7784]: I0223 13:13:44.701286 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:44.701361 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:44.701361 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:44.701361 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:44.701994 master-0 kubenswrapper[7784]: I0223 13:13:44.701367 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:45.702066 master-0 kubenswrapper[7784]: I0223 13:13:45.701911 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:45.702066 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:45.702066 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:45.702066 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:45.702066 master-0 kubenswrapper[7784]: I0223 13:13:45.702046 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:46.051543 master-0 kubenswrapper[7784]: I0223 13:13:46.051439 7784 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 23 13:13:46.052183 master-0 kubenswrapper[7784]: E0223 13:13:46.051857 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be03a0b7-337b-4809-9dab-8e2314cf6bfa" containerName="installer" Feb 23 13:13:46.052183 master-0 kubenswrapper[7784]: I0223 13:13:46.051888 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="be03a0b7-337b-4809-9dab-8e2314cf6bfa" containerName="installer" Feb 23 13:13:46.052183 master-0 kubenswrapper[7784]: I0223 13:13:46.052166 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="be03a0b7-337b-4809-9dab-8e2314cf6bfa" containerName="installer" Feb 23 13:13:46.053112 master-0 kubenswrapper[7784]: I0223 13:13:46.053032 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.123748 master-0 kubenswrapper[7784]: I0223 13:13:46.076853 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 23 13:13:46.127037 master-0 kubenswrapper[7784]: I0223 13:13:46.126464 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.127037 master-0 kubenswrapper[7784]: I0223 13:13:46.126594 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.127037 master-0 kubenswrapper[7784]: I0223 13:13:46.126753 
7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.228868 master-0 kubenswrapper[7784]: I0223 13:13:46.228762 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.229243 master-0 kubenswrapper[7784]: I0223 13:13:46.228979 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.229243 master-0 kubenswrapper[7784]: I0223 13:13:46.229099 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.229243 master-0 kubenswrapper[7784]: I0223 13:13:46.229207 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.229734 master-0 kubenswrapper[7784]: I0223 13:13:46.229125 7784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.253144 master-0 kubenswrapper[7784]: I0223 13:13:46.253032 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.444491 master-0 kubenswrapper[7784]: I0223 13:13:46.444283 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:13:46.701903 master-0 kubenswrapper[7784]: I0223 13:13:46.701720 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:46.701903 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:46.701903 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:46.701903 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:46.701903 master-0 kubenswrapper[7784]: I0223 13:13:46.701807 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:46.935850 master-0 kubenswrapper[7784]: I0223 13:13:46.935085 7784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 23 
13:13:47.020122 master-0 kubenswrapper[7784]: I0223 13:13:47.020045 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"27c1e327-cb40-4b36-b371-20d1271b8d8d","Type":"ContainerStarted","Data":"6f9da06ee23b6cb8b9623f7a51ebd1e82f9f88d5c18ad94ee1191bb007985ffa"} Feb 23 13:13:47.701584 master-0 kubenswrapper[7784]: I0223 13:13:47.701525 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:47.701584 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:47.701584 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:47.701584 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:47.701931 master-0 kubenswrapper[7784]: I0223 13:13:47.701593 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:48.028654 master-0 kubenswrapper[7784]: I0223 13:13:48.028566 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"27c1e327-cb40-4b36-b371-20d1271b8d8d","Type":"ContainerStarted","Data":"6b29af04ef4cfc936396d3b8f81eed64ef8bee70e9754f067615d5e03a3e066c"} Feb 23 13:13:48.059821 master-0 kubenswrapper[7784]: I0223 13:13:48.059667 7784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.05964371 podStartE2EDuration="2.05964371s" podCreationTimestamp="2026-02-23 13:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 13:13:48.052177578 +0000 UTC m=+770.787031251" watchObservedRunningTime="2026-02-23 13:13:48.05964371 +0000 UTC m=+770.794497363" Feb 23 13:13:48.582585 master-0 kubenswrapper[7784]: I0223 13:13:48.582522 7784 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 23 13:13:48.582992 master-0 kubenswrapper[7784]: I0223 13:13:48.582948 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 23 13:13:48.590160 master-0 kubenswrapper[7784]: I0223 13:13:48.590117 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:13:48.701420 master-0 kubenswrapper[7784]: I0223 13:13:48.701320 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:48.701420 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:48.701420 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:48.701420 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:48.701884 master-0 kubenswrapper[7784]: I0223 13:13:48.701424 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:49.702525 master-0 kubenswrapper[7784]: I0223 13:13:49.702425 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:49.702525 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:49.702525 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:49.702525 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:49.703566 master-0 kubenswrapper[7784]: I0223 13:13:49.702527 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:50.701233 master-0 kubenswrapper[7784]: I0223 13:13:50.701151 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:13:50.701233 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld Feb 23 13:13:50.701233 master-0 kubenswrapper[7784]: [+]process-running ok Feb 23 13:13:50.701233 master-0 kubenswrapper[7784]: healthz check failed Feb 23 13:13:50.701542 master-0 kubenswrapper[7784]: I0223 13:13:50.701269 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:13:51.065082 master-0 kubenswrapper[7784]: I0223 13:13:51.065002 7784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_045ca7a8-edf0-4476-a195-f100aaf403cc/installer/0.log" Feb 23 13:13:51.065082 master-0 kubenswrapper[7784]: I0223 13:13:51.065073 7784 generic.go:334] "Generic (PLEG): container finished" podID="045ca7a8-edf0-4476-a195-f100aaf403cc" containerID="c2a3282a0425d31fa975719eb30b221dd8f485e7e9f360fd96b2586582e8a439" exitCode=1 Feb 23 13:13:51.066017 master-0 kubenswrapper[7784]: I0223 13:13:51.065102 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"045ca7a8-edf0-4476-a195-f100aaf403cc","Type":"ContainerDied","Data":"c2a3282a0425d31fa975719eb30b221dd8f485e7e9f360fd96b2586582e8a439"} Feb 23 13:13:51.332950 master-0 kubenswrapper[7784]: I0223 13:13:51.332807 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_045ca7a8-edf0-4476-a195-f100aaf403cc/installer/0.log" Feb 23 13:13:51.332950 master-0 kubenswrapper[7784]: I0223 13:13:51.332924 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:13:51.430126 master-0 kubenswrapper[7784]: I0223 13:13:51.430035 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/045ca7a8-edf0-4476-a195-f100aaf403cc-kube-api-access\") pod \"045ca7a8-edf0-4476-a195-f100aaf403cc\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " Feb 23 13:13:51.430320 master-0 kubenswrapper[7784]: I0223 13:13:51.430153 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-kubelet-dir\") pod \"045ca7a8-edf0-4476-a195-f100aaf403cc\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " Feb 23 13:13:51.430424 master-0 kubenswrapper[7784]: I0223 13:13:51.430386 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-var-lock\") pod \"045ca7a8-edf0-4476-a195-f100aaf403cc\" (UID: \"045ca7a8-edf0-4476-a195-f100aaf403cc\") " Feb 23 13:13:51.430547 master-0 kubenswrapper[7784]: I0223 13:13:51.430483 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "045ca7a8-edf0-4476-a195-f100aaf403cc" (UID: "045ca7a8-edf0-4476-a195-f100aaf403cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:51.430658 master-0 kubenswrapper[7784]: I0223 13:13:51.430590 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-var-lock" (OuterVolumeSpecName: "var-lock") pod "045ca7a8-edf0-4476-a195-f100aaf403cc" (UID: "045ca7a8-edf0-4476-a195-f100aaf403cc"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:51.431520 master-0 kubenswrapper[7784]: I0223 13:13:51.431475 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:51.431570 master-0 kubenswrapper[7784]: I0223 13:13:51.431526 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/045ca7a8-edf0-4476-a195-f100aaf403cc-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:13:51.434057 master-0 kubenswrapper[7784]: I0223 13:13:51.433970 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/045ca7a8-edf0-4476-a195-f100aaf403cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "045ca7a8-edf0-4476-a195-f100aaf403cc" (UID: "045ca7a8-edf0-4476-a195-f100aaf403cc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:13:51.533929 master-0 kubenswrapper[7784]: I0223 13:13:51.533841 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/045ca7a8-edf0-4476-a195-f100aaf403cc-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:13:51.702964 master-0 kubenswrapper[7784]: I0223 13:13:51.702716 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:51.702964 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:51.702964 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:51.702964 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:51.703446 master-0 kubenswrapper[7784]: I0223 13:13:51.702939 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:52.078157 master-0 kubenswrapper[7784]: I0223 13:13:52.078064 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_045ca7a8-edf0-4476-a195-f100aaf403cc/installer/0.log"
Feb 23 13:13:52.078157 master-0 kubenswrapper[7784]: I0223 13:13:52.078168 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"045ca7a8-edf0-4476-a195-f100aaf403cc","Type":"ContainerDied","Data":"5e3bd25dbe6559d87310d6a6eb1aaa87a045034f74e111f053c8eaa27e2f380e"}
Feb 23 13:13:52.079056 master-0 kubenswrapper[7784]: I0223 13:13:52.078237 7784 scope.go:117] "RemoveContainer" containerID="c2a3282a0425d31fa975719eb30b221dd8f485e7e9f360fd96b2586582e8a439"
Feb 23 13:13:52.079056 master-0 kubenswrapper[7784]: I0223 13:13:52.078273 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 23 13:13:52.106895 master-0 kubenswrapper[7784]: I0223 13:13:52.106800 7784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 23 13:13:52.115030 master-0 kubenswrapper[7784]: I0223 13:13:52.114940 7784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 23 13:13:52.702275 master-0 kubenswrapper[7784]: I0223 13:13:52.702185 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:52.702275 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:52.702275 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:52.702275 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:52.702750 master-0 kubenswrapper[7784]: I0223 13:13:52.702279 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:53.530945 master-0 kubenswrapper[7784]: I0223 13:13:53.530815 7784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="045ca7a8-edf0-4476-a195-f100aaf403cc" path="/var/lib/kubelet/pods/045ca7a8-edf0-4476-a195-f100aaf403cc/volumes"
Feb 23 13:13:53.701492 master-0 kubenswrapper[7784]: I0223 13:13:53.701408 7784 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-kcfgf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:13:53.701492 master-0 kubenswrapper[7784]: [-]has-synced failed: reason withheld
Feb 23 13:13:53.701492 master-0 kubenswrapper[7784]: [+]process-running ok
Feb 23 13:13:53.701492 master-0 kubenswrapper[7784]: healthz check failed
Feb 23 13:13:53.701877 master-0 kubenswrapper[7784]: I0223 13:13:53.701500 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:13:53.701877 master-0 kubenswrapper[7784]: I0223 13:13:53.701560 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:13:53.702552 master-0 kubenswrapper[7784]: I0223 13:13:53.702503 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"cf7e22147b726d7bb900d92e5a79955383f2346325db290ec3e45f21c5be3266"} pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" containerMessage="Container router failed startup probe, will be restarted"
Feb 23 13:13:53.702650 master-0 kubenswrapper[7784]: I0223 13:13:53.702566 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" podUID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerName="router" containerID="cri-o://cf7e22147b726d7bb900d92e5a79955383f2346325db290ec3e45f21c5be3266" gracePeriod=3600
Feb 23 13:13:58.583440 master-0 kubenswrapper[7784]: I0223 13:13:58.582963 7784 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 23 13:13:58.583440 master-0 kubenswrapper[7784]: I0223 13:13:58.583088 7784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 23 13:13:58.583440 master-0 kubenswrapper[7784]: I0223 13:13:58.583164 7784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:13:58.585514 master-0 kubenswrapper[7784]: I0223 13:13:58.585438 7784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 23 13:13:58.586014 master-0 kubenswrapper[7784]: I0223 13:13:58.585901 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" containerID="cri-o://2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b" gracePeriod=30
Feb 23 13:14:15.056743 master-0 kubenswrapper[7784]: I0223 13:14:15.056671 7784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:28.740580 master-0 kubenswrapper[7784]: E0223 13:14:28.740333 7784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3e061f9d09dab5dbaef15b3f1e67a0.slice/crio-2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3e061f9d09dab5dbaef15b3f1e67a0.slice/crio-conmon-2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b.scope\": RecentStats: unable to find data in memory cache]"
Feb 23 13:14:29.394324 master-0 kubenswrapper[7784]: I0223 13:14:29.394250 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager/0.log"
Feb 23 13:14:29.394515 master-0 kubenswrapper[7784]: I0223 13:14:29.394369 7784 generic.go:334] "Generic (PLEG): container finished" podID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerID="2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b" exitCode=137
Feb 23 13:14:29.394515 master-0 kubenswrapper[7784]: I0223 13:14:29.394425 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerDied","Data":"2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b"}
Feb 23 13:14:30.405799 master-0 kubenswrapper[7784]: I0223 13:14:30.405699 7784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager/0.log"
Feb 23 13:14:30.406716 master-0 kubenswrapper[7784]: I0223 13:14:30.405813 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"e375fe5c02f0608ef4aac501c8122f7edac3d21f041acfb53911dc7efc555b71"}
Feb 23 13:14:35.216063 master-0 kubenswrapper[7784]: I0223 13:14:35.215999 7784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:14:35.217212 master-0 kubenswrapper[7784]: E0223 13:14:35.217194 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="045ca7a8-edf0-4476-a195-f100aaf403cc" containerName="installer"
Feb 23 13:14:35.217292 master-0 kubenswrapper[7784]: I0223 13:14:35.217281 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="045ca7a8-edf0-4476-a195-f100aaf403cc" containerName="installer"
Feb 23 13:14:35.217516 master-0 kubenswrapper[7784]: I0223 13:14:35.217502 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="045ca7a8-edf0-4476-a195-f100aaf403cc" containerName="installer"
Feb 23 13:14:35.218030 master-0 kubenswrapper[7784]: I0223 13:14:35.218015 7784 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 23 13:14:35.218294 master-0 kubenswrapper[7784]: I0223 13:14:35.218219 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.218478 master-0 kubenswrapper[7784]: I0223 13:14:35.218452 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" containerID="cri-o://a6c6c79f23b0abea958a23a6a452ad603f2442cfcf12d274565330ccbe7468f8" gracePeriod=15
Feb 23 13:14:35.218692 master-0 kubenswrapper[7784]: I0223 13:14:35.218552 7784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b4fac1a45391e1b8c8d33575e403cce50d3b72e24f353f507b5f94bf171c63ab" gracePeriod=15
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: I0223 13:14:35.219733 7784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: E0223 13:14:35.220064 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: I0223 13:14:35.220079 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: E0223 13:14:35.220093 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: I0223 13:14:35.220100 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: E0223 13:14:35.220129 7784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: I0223 13:14:35.220136 7784 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: I0223 13:14:35.220253 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: I0223 13:14:35.220279 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: I0223 13:14:35.220290 7784 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:14:35.221973 master-0 kubenswrapper[7784]: I0223 13:14:35.221945 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.279851 master-0 kubenswrapper[7784]: I0223 13:14:35.279692 7784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:14:35.296456 master-0 kubenswrapper[7784]: E0223 13:14:35.296383 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.347950 master-0 kubenswrapper[7784]: I0223 13:14:35.347895 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.348032 master-0 kubenswrapper[7784]: I0223 13:14:35.347964 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.348032 master-0 kubenswrapper[7784]: I0223 13:14:35.347997 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.348032 master-0 kubenswrapper[7784]: I0223 13:14:35.348026 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.348240 master-0 kubenswrapper[7784]: I0223 13:14:35.348213 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.348320 master-0 kubenswrapper[7784]: I0223 13:14:35.348295 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.348374 master-0 kubenswrapper[7784]: I0223 13:14:35.348319 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.348374 master-0 kubenswrapper[7784]: I0223 13:14:35.348367 7784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.460076 master-0 kubenswrapper[7784]: I0223 13:14:35.459994 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.460331 master-0 kubenswrapper[7784]: I0223 13:14:35.460106 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.460331 master-0 kubenswrapper[7784]: I0223 13:14:35.460177 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.460331 master-0 kubenswrapper[7784]: I0223 13:14:35.460201 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.460331 master-0 kubenswrapper[7784]: I0223 13:14:35.460249 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.460331 master-0 kubenswrapper[7784]: I0223 13:14:35.460281 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.460331 master-0 kubenswrapper[7784]: I0223 13:14:35.460318 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.460617 master-0 kubenswrapper[7784]: I0223 13:14:35.460411 7784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.460778 master-0 kubenswrapper[7784]: I0223 13:14:35.460722 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.460942 master-0 kubenswrapper[7784]: I0223 13:14:35.460830 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.461051 master-0 kubenswrapper[7784]: I0223 13:14:35.460890 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.461190 master-0 kubenswrapper[7784]: I0223 13:14:35.461060 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.461363 master-0 kubenswrapper[7784]: I0223 13:14:35.461308 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.461565 master-0 kubenswrapper[7784]: I0223 13:14:35.461316 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.461565 master-0 kubenswrapper[7784]: I0223 13:14:35.461502 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.461651 master-0 kubenswrapper[7784]: I0223 13:14:35.461581 7784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.462420 master-0 kubenswrapper[7784]: I0223 13:14:35.462383 7784 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="b4fac1a45391e1b8c8d33575e403cce50d3b72e24f353f507b5f94bf171c63ab" exitCode=0
Feb 23 13:14:35.467133 master-0 kubenswrapper[7784]: I0223 13:14:35.467019 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"27c1e327-cb40-4b36-b371-20d1271b8d8d","Type":"ContainerDied","Data":"6b29af04ef4cfc936396d3b8f81eed64ef8bee70e9754f067615d5e03a3e066c"}
Feb 23 13:14:35.467631 master-0 kubenswrapper[7784]: I0223 13:14:35.466956 7784 generic.go:334] "Generic (PLEG): container finished" podID="27c1e327-cb40-4b36-b371-20d1271b8d8d" containerID="6b29af04ef4cfc936396d3b8f81eed64ef8bee70e9754f067615d5e03a3e066c" exitCode=0
Feb 23 13:14:35.469772 master-0 kubenswrapper[7784]: I0223 13:14:35.469688 7784 status_manager.go:851] "Failed to get status for pod" podUID="27c1e327-cb40-4b36-b371-20d1271b8d8d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:14:35.470546 master-0 kubenswrapper[7784]: I0223 13:14:35.470486 7784 status_manager.go:851] "Failed to get status for pod" podUID="5c4f5d60772fa42f26e9c219bffa62b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:14:35.564299 master-0 kubenswrapper[7784]: I0223 13:14:35.564146 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:35.597609 master-0 kubenswrapper[7784]: I0223 13:14:35.597523 7784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:35.605198 master-0 kubenswrapper[7784]: W0223 13:14:35.605063 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c4f5d60772fa42f26e9c219bffa62b9.slice/crio-052c44c6601541168da6658fd684e918c275c792bb8a5e698af0c5869ee863d3 WatchSource:0}: Error finding container 052c44c6601541168da6658fd684e918c275c792bb8a5e698af0c5869ee863d3: Status 404 returned error can't find the container with id 052c44c6601541168da6658fd684e918c275c792bb8a5e698af0c5869ee863d3
Feb 23 13:14:35.615711 master-0 kubenswrapper[7784]: E0223 13:14:35.615490 7784 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e2705b5b10c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:5c4f5d60772fa42f26e9c219bffa62b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:14:35.613778116 +0000 UTC m=+818.348631799,LastTimestamp:2026-02-23 13:14:35.613778116 +0000 UTC m=+818.348631799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:14:35.632918 master-0 kubenswrapper[7784]: W0223 13:14:35.632847 7784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb342c942d3d92fd08ed7cf68fafb94c.slice/crio-6244c01d47d261c3397fd1d23da4ef09fefd7e2aec48680428b0aeff62e0f579 WatchSource:0}: Error finding container 6244c01d47d261c3397fd1d23da4ef09fefd7e2aec48680428b0aeff62e0f579: Status 404 returned error can't find the container with id 6244c01d47d261c3397fd1d23da4ef09fefd7e2aec48680428b0aeff62e0f579
Feb 23 13:14:36.437935 master-0 kubenswrapper[7784]: E0223 13:14:36.437706 7784 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e2705b5b10c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:5c4f5d60772fa42f26e9c219bffa62b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:14:35.613778116 +0000 UTC m=+818.348631799,LastTimestamp:2026-02-23 13:14:35.613778116 +0000 UTC m=+818.348631799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:14:36.479990 master-0 kubenswrapper[7784]: I0223 13:14:36.479903 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71"}
Feb 23 13:14:36.480217 master-0 kubenswrapper[7784]: I0223 13:14:36.480023 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"052c44c6601541168da6658fd684e918c275c792bb8a5e698af0c5869ee863d3"}
Feb 23 13:14:36.482377 master-0 kubenswrapper[7784]: I0223 13:14:36.482274 7784 status_manager.go:851] "Failed to get status for pod" podUID="27c1e327-cb40-4b36-b371-20d1271b8d8d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:14:36.483010 master-0 kubenswrapper[7784]: I0223 13:14:36.482912 7784 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757" exitCode=0
Feb 23 13:14:36.483226 master-0 kubenswrapper[7784]: I0223 13:14:36.483138 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757"}
Feb 23 13:14:36.483284 master-0 kubenswrapper[7784]: I0223 13:14:36.483252 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"6244c01d47d261c3397fd1d23da4ef09fefd7e2aec48680428b0aeff62e0f579"}
Feb 23 13:14:36.483967 master-0 kubenswrapper[7784]: I0223 13:14:36.483731 7784 status_manager.go:851] "Failed to get status for pod" podUID="5c4f5d60772fa42f26e9c219bffa62b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:14:36.484982 master-0 kubenswrapper[7784]: I0223 13:14:36.484929 7784 status_manager.go:851] "Failed to get status for pod" podUID="27c1e327-cb40-4b36-b371-20d1271b8d8d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:14:36.485174 master-0 kubenswrapper[7784]: E0223 13:14:36.484932 7784 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:36.486169 master-0 kubenswrapper[7784]: I0223 13:14:36.486091 7784 status_manager.go:851] "Failed to get status for pod" podUID="5c4f5d60772fa42f26e9c219bffa62b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:14:36.880036 master-0 kubenswrapper[7784]: I0223 13:14:36.879982 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 23 13:14:36.881427 master-0 kubenswrapper[7784]: I0223 13:14:36.881322 7784 status_manager.go:851] "Failed to get status for pod" podUID="27c1e327-cb40-4b36-b371-20d1271b8d8d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:14:36.882253 master-0 kubenswrapper[7784]: I0223 13:14:36.882204 7784 status_manager.go:851] "Failed to get status for pod" podUID="5c4f5d60772fa42f26e9c219bffa62b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:14:36.984972 master-0 kubenswrapper[7784]: I0223 13:14:36.984771 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") pod \"27c1e327-cb40-4b36-b371-20d1271b8d8d\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") "
Feb 23 13:14:36.985160 master-0 kubenswrapper[7784]: I0223 13:14:36.984946 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock" (OuterVolumeSpecName: "var-lock") pod "27c1e327-cb40-4b36-b371-20d1271b8d8d" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:14:36.985160 master-0 kubenswrapper[7784]: I0223 13:14:36.985109 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") pod \"27c1e327-cb40-4b36-b371-20d1271b8d8d\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") "
Feb 23 13:14:36.985316 master-0 kubenswrapper[7784]: I0223 13:14:36.985203 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "27c1e327-cb40-4b36-b371-20d1271b8d8d" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:14:36.985405 master-0 kubenswrapper[7784]: I0223 13:14:36.985376 7784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"27c1e327-cb40-4b36-b371-20d1271b8d8d\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") "
Feb 23 13:14:36.986027 master-0 kubenswrapper[7784]: I0223 13:14:36.985970 7784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:14:36.986085 master-0 kubenswrapper[7784]: I0223 13:14:36.986024 7784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:14:36.990414 master-0 kubenswrapper[7784]: I0223 13:14:36.990389 7784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "27c1e327-cb40-4b36-b371-20d1271b8d8d" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:14:37.088282 master-0 kubenswrapper[7784]: I0223 13:14:37.088189 7784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:14:37.493767 master-0 kubenswrapper[7784]: I0223 13:14:37.493706 7784 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="a6c6c79f23b0abea958a23a6a452ad603f2442cfcf12d274565330ccbe7468f8" exitCode=0
Feb 23 13:14:37.495789 master-0 kubenswrapper[7784]: I0223 13:14:37.495721 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"27c1e327-cb40-4b36-b371-20d1271b8d8d","Type":"ContainerDied","Data":"6f9da06ee23b6cb8b9623f7a51ebd1e82f9f88d5c18ad94ee1191bb007985ffa"}
Feb 23 13:14:37.495900 master-0 kubenswrapper[7784]: I0223 13:14:37.495792 7784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9da06ee23b6cb8b9623f7a51ebd1e82f9f88d5c18ad94ee1191bb007985ffa"
Feb 23 13:14:37.495900 master-0 kubenswrapper[7784]: I0223 13:14:37.495754 7784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:37.498674 master-0 kubenswrapper[7784]: I0223 13:14:37.498611 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94"} Feb 23 13:14:37.498674 master-0 kubenswrapper[7784]: I0223 13:14:37.498669 7784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c"} Feb 23 13:14:38.020973 master-0 kubenswrapper[7784]: I0223 13:14:38.020911 7784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:14:38.031468 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 23 13:14:38.063830 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 23 13:14:38.064123 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 23 13:14:38.065272 master-0 systemd[1]: kubelet.service: Consumed 2min 4.765s CPU time. Feb 23 13:14:38.079046 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
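The kubelet restart above re-emits the same block of `Flag --… has been deprecated` warnings seen at every startup. As a hedged sketch (a hypothetical helper, not part of any cluster tooling), the deprecated flag names can be pulled out of journal lines like these with a small script:

```python
import re

# Matches the "Flag --<name> has been deprecated" lines the kubelet
# prints at startup for every deprecated command-line flag.
DEPRECATED_RE = re.compile(r"Flag (--[\w-]+) has been deprecated")

def deprecated_flags(lines):
    """Return the unique deprecated flag names, in first-seen order."""
    seen = []
    for line in lines:
        m = DEPRECATED_RE.search(line)
        if m and m.group(1) not in seen:
            seen.append(m.group(1))
    return seen

# Two sample journal lines in the same shape as the log above.
sample = [
    'Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file',
    'Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead.',
]
print(deprecated_flags(sample))
# ['--container-runtime-endpoint', '--minimum-container-ttl-duration']
```

Each flag named this way (here `--container-runtime-endpoint`, `--minimum-container-ttl-duration`, `--volume-plugin-dir`, `--register-with-taints`, `--system-reserved`) is one the kubelet wants moved into the config file passed via `--config`.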
Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 13:14:38.202482 master-0 kubenswrapper[26474]: I0223 13:14:38.199405 26474 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205254 26474 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205296 26474 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205329 26474 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205368 26474 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205378 26474 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205387 26474 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205394 26474 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205402 26474 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205409 26474 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205416 26474 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205423 26474 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205431 26474 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205440 26474 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205450 26474 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205457 26474 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205466 26474 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205473 26474 feature_gate.go:330] unrecognized feature gate: Example Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205480 26474 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 13:14:38.205787 master-0 kubenswrapper[26474]: W0223 13:14:38.205486 26474 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205493 26474 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205500 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205507 26474 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205514 26474 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205521 26474 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205527 26474 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205533 26474 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 13:14:38.207083 master-0 
kubenswrapper[26474]: W0223 13:14:38.205540 26474 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205546 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205553 26474 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205560 26474 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205566 26474 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205572 26474 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205579 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205586 26474 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205592 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205599 26474 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205610 26474 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205617 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 13:14:38.207083 master-0 kubenswrapper[26474]: W0223 13:14:38.205626 26474 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205633 26474 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205640 26474 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205647 26474 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205656 26474 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205664 26474 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205673 26474 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205680 26474 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205687 26474 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205695 26474 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205701 26474 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205738 26474 feature_gate.go:330] unrecognized feature gate: 
BuildCSIVolumes Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205747 26474 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205754 26474 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205760 26474 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205767 26474 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205773 26474 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205780 26474 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205788 26474 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 13:14:38.208875 master-0 kubenswrapper[26474]: W0223 13:14:38.205795 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205803 26474 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205810 26474 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205818 26474 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205826 26474 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205833 26474 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205840 26474 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205847 26474 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205855 26474 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205862 26474 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205868 26474 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205874 26474 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205882 26474 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205889 26474 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: W0223 13:14:38.205896 26474 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: I0223 13:14:38.206021 26474 flags.go:64] FLAG: --address="0.0.0.0" Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: I0223 13:14:38.206035 26474 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: I0223 13:14:38.206049 26474 flags.go:64] FLAG: --anonymous-auth="true" Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: I0223 13:14:38.206057 26474 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: I0223 
13:14:38.206065 26474 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: I0223 13:14:38.206072 26474 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 23 13:14:38.210518 master-0 kubenswrapper[26474]: I0223 13:14:38.206080 26474 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206088 26474 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206095 26474 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206104 26474 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206113 26474 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206120 26474 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206126 26474 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206133 26474 flags.go:64] FLAG: --cgroup-root="" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206139 26474 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206145 26474 flags.go:64] FLAG: --client-ca-file="" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206151 26474 flags.go:64] FLAG: --cloud-config="" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206158 26474 flags.go:64] FLAG: --cloud-provider="" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206163 26474 flags.go:64] FLAG: --cluster-dns="[]" Feb 23 13:14:38.211920 master-0 
kubenswrapper[26474]: I0223 13:14:38.206171 26474 flags.go:64] FLAG: --cluster-domain="" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206177 26474 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206184 26474 flags.go:64] FLAG: --config-dir="" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206190 26474 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206197 26474 flags.go:64] FLAG: --container-log-max-files="5" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206205 26474 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206218 26474 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206224 26474 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206231 26474 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206237 26474 flags.go:64] FLAG: --contention-profiling="false" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206243 26474 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206249 26474 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 23 13:14:38.211920 master-0 kubenswrapper[26474]: I0223 13:14:38.206256 26474 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206262 26474 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206272 26474 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 23 
13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206280 26474 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206288 26474 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206296 26474 flags.go:64] FLAG: --enable-load-reader="false" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206304 26474 flags.go:64] FLAG: --enable-server="true" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206313 26474 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206324 26474 flags.go:64] FLAG: --event-burst="100" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206331 26474 flags.go:64] FLAG: --event-qps="50" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206361 26474 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206368 26474 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206375 26474 flags.go:64] FLAG: --eviction-hard="" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206383 26474 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206390 26474 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206397 26474 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206404 26474 flags.go:64] FLAG: --eviction-soft="" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206411 26474 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 23 13:14:38.214606 master-0 
kubenswrapper[26474]: I0223 13:14:38.206417 26474 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206423 26474 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206429 26474 flags.go:64] FLAG: --experimental-mounter-path="" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206435 26474 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206442 26474 flags.go:64] FLAG: --fail-swap-on="true" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206448 26474 flags.go:64] FLAG: --feature-gates="" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206456 26474 flags.go:64] FLAG: --file-check-frequency="20s" Feb 23 13:14:38.214606 master-0 kubenswrapper[26474]: I0223 13:14:38.206463 26474 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206473 26474 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206481 26474 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206488 26474 flags.go:64] FLAG: --healthz-port="10248" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206494 26474 flags.go:64] FLAG: --help="false" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206500 26474 flags.go:64] FLAG: --hostname-override="" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206506 26474 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206513 26474 flags.go:64] FLAG: --http-check-frequency="20s" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206520 
26474 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206526 26474 flags.go:64] FLAG: --image-credential-provider-config="" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206532 26474 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206538 26474 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206545 26474 flags.go:64] FLAG: --image-service-endpoint="" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206552 26474 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206559 26474 flags.go:64] FLAG: --kube-api-burst="100" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206566 26474 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206573 26474 flags.go:64] FLAG: --kube-api-qps="50" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206580 26474 flags.go:64] FLAG: --kube-reserved="" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206587 26474 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206593 26474 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206599 26474 flags.go:64] FLAG: --kubelet-cgroups="" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206606 26474 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206613 26474 flags.go:64] FLAG: --lock-file="" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206619 26474 
flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206627 26474 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 23 13:14:38.216428 master-0 kubenswrapper[26474]: I0223 13:14:38.206634 26474 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206653 26474 flags.go:64] FLAG: --log-json-split-stream="false" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206659 26474 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206665 26474 flags.go:64] FLAG: --log-text-split-stream="false" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206671 26474 flags.go:64] FLAG: --logging-format="text" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206677 26474 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206683 26474 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206689 26474 flags.go:64] FLAG: --manifest-url="" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206698 26474 flags.go:64] FLAG: --manifest-url-header="" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206706 26474 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206712 26474 flags.go:64] FLAG: --max-open-files="1000000" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206719 26474 flags.go:64] FLAG: --max-pods="110" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206726 26474 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206732 26474 flags.go:64] FLAG: 
--maximum-dead-containers-per-container="1"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206738 26474 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206744 26474 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206752 26474 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206758 26474 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206764 26474 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206779 26474 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206785 26474 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206792 26474 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206798 26474 flags.go:64] FLAG: --pod-cidr=""
Feb 23 13:14:38.218637 master-0 kubenswrapper[26474]: I0223 13:14:38.206804 26474 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206815 26474 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206821 26474 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206827 26474 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206833 26474 flags.go:64] FLAG: --port="10250"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206839 26474 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206903 26474 flags.go:64] FLAG: --provider-id=""
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206911 26474 flags.go:64] FLAG: --qos-reserved=""
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206917 26474 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206924 26474 flags.go:64] FLAG: --register-node="true"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206930 26474 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206936 26474 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206947 26474 flags.go:64] FLAG: --registry-burst="10"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206953 26474 flags.go:64] FLAG: --registry-qps="5"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206960 26474 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206966 26474 flags.go:64] FLAG: --reserved-memory=""
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206973 26474 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206982 26474 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206988 26474 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.206994 26474 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.207000 26474 flags.go:64] FLAG: --runonce="false"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.207007 26474 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.207013 26474 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.207020 26474 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.207026 26474 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.207032 26474 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 13:14:38.220708 master-0 kubenswrapper[26474]: I0223 13:14:38.207039 26474 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207045 26474 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207051 26474 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207057 26474 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207063 26474 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207069 26474 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207076 26474 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207082 26474 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207089 26474 flags.go:64] FLAG: --system-cgroups=""
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207096 26474 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207106 26474 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207114 26474 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207122 26474 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207133 26474 flags.go:64] FLAG: --tls-min-version=""
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207141 26474 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207149 26474 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207158 26474 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207167 26474 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207175 26474 flags.go:64] FLAG: --v="2"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207185 26474 flags.go:64] FLAG: --version="false"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207193 26474 flags.go:64] FLAG: --vmodule=""
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207200 26474 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: I0223 13:14:38.207208 26474 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: W0223 13:14:38.207381 26474 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: W0223 13:14:38.207413 26474 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:14:38.226568 master-0 kubenswrapper[26474]: W0223 13:14:38.207420 26474 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207425 26474 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207433 26474 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207440 26474 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207446 26474 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207451 26474 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207456 26474 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207461 26474 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207467 26474 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207472 26474 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207477 26474 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207482 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207487 26474 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207492 26474 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207497 26474 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207502 26474 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207507 26474 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207512 26474 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207517 26474 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:14:38.228212 master-0 kubenswrapper[26474]: W0223 13:14:38.207522 26474 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207528 26474 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207581 26474 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207588 26474 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207593 26474 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207599 26474 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207605 26474 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207610 26474 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207614 26474 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207619 26474 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207625 26474 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207630 26474 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207638 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207644 26474 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207649 26474 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207654 26474 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207659 26474 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207665 26474 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207670 26474 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207675 26474 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:14:38.230815 master-0 kubenswrapper[26474]: W0223 13:14:38.207680 26474 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207685 26474 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207690 26474 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207695 26474 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207700 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207705 26474 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207710 26474 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207717 26474 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207723 26474 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207728 26474 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207733 26474 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207739 26474 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207744 26474 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207757 26474 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207763 26474 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207769 26474 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207775 26474 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207780 26474 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207785 26474 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:14:38.233819 master-0 kubenswrapper[26474]: W0223 13:14:38.207791 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207796 26474 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207801 26474 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207806 26474 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207811 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207818 26474 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207823 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207828 26474 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207833 26474 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207839 26474 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207845 26474 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.207851 26474 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: I0223 13:14:38.207870 26474 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: I0223 13:14:38.215755 26474 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: I0223 13:14:38.215799 26474 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 13:14:38.235377 master-0 kubenswrapper[26474]: W0223 13:14:38.215891 26474 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215898 26474 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215902 26474 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215906 26474 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215911 26474 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215916 26474 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215923 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215928 26474 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215933 26474 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215939 26474 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215944 26474 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215950 26474 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215955 26474 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215960 26474 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215965 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.215972 26474 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.216008 26474 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.216014 26474 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.216019 26474 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:14:38.248871 master-0 kubenswrapper[26474]: W0223 13:14:38.216024 26474 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216029 26474 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216033 26474 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216038 26474 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216043 26474 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216048 26474 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216053 26474 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216058 26474 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216063 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216067 26474 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216072 26474 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216077 26474 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216083 26474 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216088 26474 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216094 26474 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216100 26474 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216105 26474 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216110 26474 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216115 26474 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216120 26474 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:14:38.251517 master-0 kubenswrapper[26474]: W0223 13:14:38.216125 26474 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216130 26474 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216135 26474 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216139 26474 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216144 26474 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216149 26474 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216153 26474 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216158 26474 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216163 26474 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216168 26474 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216172 26474 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216177 26474 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216184 26474 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216189 26474 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216194 26474 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216199 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216203 26474 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216210 26474 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216216 26474 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:14:38.252428 master-0 kubenswrapper[26474]: W0223 13:14:38.216221 26474 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216226 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216231 26474 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216237 26474 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216242 26474 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216250 26474 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216255 26474 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216260 26474 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216265 26474 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216269 26474 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216274 26474 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216278 26474 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216285 26474 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216290 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: I0223 13:14:38.216298 26474 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:14:38.254973 master-0 kubenswrapper[26474]: W0223 13:14:38.216707 26474 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216720 26474 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216725 26474 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216729 26474 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216732 26474 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216736 26474 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216740 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216744 26474 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216748 26474 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216752 26474 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216755 26474 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216760 26474 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216764 26474 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216767 26474 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216771 26474 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216775 26474 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216779 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216783 26474 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216787 26474 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216791 26474 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:14:38.256025 master-0 kubenswrapper[26474]: W0223 13:14:38.216794 26474 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216798 26474 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216802 26474 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216806 26474 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216810 26474 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216813 26474 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216817 26474 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216820 26474 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216825 26474 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216830 26474 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216834 26474 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216838 26474 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216842 26474 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216847 26474 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216850 26474 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216854 26474 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216858 26474 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216861 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216866 26474 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:14:38.257208 master-0 kubenswrapper[26474]: W0223 13:14:38.216871 26474 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216875 26474 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216879 26474 feature_gate.go:330] unrecognized feature gate: Example Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216882 26474 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216886 26474 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216890 26474 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216894 26474 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216897 26474 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216901 26474 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216904 26474 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216908 26474 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216912 26474 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216916 26474 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 13:14:38.258228 master-0 
kubenswrapper[26474]: W0223 13:14:38.216921 26474 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216926 26474 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216930 26474 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216934 26474 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216938 26474 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216942 26474 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 13:14:38.258228 master-0 kubenswrapper[26474]: W0223 13:14:38.216946 26474 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216951 26474 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216955 26474 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216960 26474 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216966 26474 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216970 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216974 26474 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216979 26474 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216983 26474 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216986 26474 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216990 26474 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216995 26474 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.216999 26474 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: W0223 13:14:38.217003 26474 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: I0223 13:14:38.217009 26474 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 13:14:38.259030 master-0 kubenswrapper[26474]: I0223 13:14:38.217197 26474 server.go:940] "Client rotation is on, will bootstrap in background" Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.218804 26474 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.218883 26474 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.219116 26474 server.go:997] "Starting client certificate rotation" Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.219127 26474 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.219358 26474 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 12:51:08 +0000 UTC, rotation deadline is 2026-02-24 06:29:43.129409807 +0000 UTC Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.219429 26474 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h15m4.909982921s for next certificate rotation Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.219919 26474 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.221656 26474 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.225243 26474 log.go:25] "Validated CRI v1 runtime API" Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.230212 26474 log.go:25] "Validated CRI v1 image API" Feb 23 13:14:38.259670 master-0 kubenswrapper[26474]: I0223 13:14:38.231582 26474 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 23 13:14:38.267082 master-0 kubenswrapper[26474]: I0223 13:14:38.266986 26474 fs.go:135] Filesystem UUIDs: map[2d6160db-474a-49c3-9ea7-0693d391532e:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 23 13:14:38.268000 master-0 kubenswrapper[26474]: I0223 13:14:38.267069 26474 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/019a9bcf24ce5ea8628fb0a222b64597a0b233bcb8a8eee4032689bd4a953ff1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/019a9bcf24ce5ea8628fb0a222b64597a0b233bcb8a8eee4032689bd4a953ff1/userdata/shm major:0 minor:428 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/047da142cf199754ddf37417b491aa94e635780094c0890acb8879faf9433391/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/047da142cf199754ddf37417b491aa94e635780094c0890acb8879faf9433391/userdata/shm major:0 minor:185 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/052c44c6601541168da6658fd684e918c275c792bb8a5e698af0c5869ee863d3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/052c44c6601541168da6658fd684e918c275c792bb8a5e698af0c5869ee863d3/userdata/shm major:0 minor:48 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/0ec420225c84b73dd443227473f7e5be0a534249c7a20c4f249c305d05092cd3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0ec420225c84b73dd443227473f7e5be0a534249c7a20c4f249c305d05092cd3/userdata/shm major:0 minor:1309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/145c6dfd849efd46dc26276303a1f5b415b80906ee5317528490f8e2825ca752/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/145c6dfd849efd46dc26276303a1f5b415b80906ee5317528490f8e2825ca752/userdata/shm major:0 minor:617 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/190e3738a8b34a408e3ce9a92a26c53265addc4888d71fa248a2acde65380192/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/190e3738a8b34a408e3ce9a92a26c53265addc4888d71fa248a2acde65380192/userdata/shm major:0 minor:819 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c/userdata/shm major:0 minor:139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2306eee29ab96f13a7b6b9bf9f3a4b8c1be47a50f030b34cf5a3b0197274b3fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2306eee29ab96f13a7b6b9bf9f3a4b8c1be47a50f030b34cf5a3b0197274b3fb/userdata/shm major:0 minor:486 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/279e93b0fd46f71a0b7004cb4febe2cd24136f3bf75f93770b5add473b652180/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/279e93b0fd46f71a0b7004cb4febe2cd24136f3bf75f93770b5add473b652180/userdata/shm major:0 minor:994 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/28b57766d3c93686f32849ca9f209837a911e1a78e068a18fdf8af950eee54e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/28b57766d3c93686f32849ca9f209837a911e1a78e068a18fdf8af950eee54e7/userdata/shm major:0 minor:897 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/309d089bca6e7c97d1cbeac6a63a1ce937ecc0912c1d3b3166d1ba3db4f77535/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/309d089bca6e7c97d1cbeac6a63a1ce937ecc0912c1d3b3166d1ba3db4f77535/userdata/shm major:0 minor:641 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/34cdfb95fd9aaeec9095ab977f01f62ceb0c9128bfe1c704df13557634391673/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/34cdfb95fd9aaeec9095ab977f01f62ceb0c9128bfe1c704df13557634391673/userdata/shm major:0 minor:1211 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/384035756c76cf55d496b6a1d9d1c2ae74da05b1b5fe6f287cc5eebff9461073/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/384035756c76cf55d496b6a1d9d1c2ae74da05b1b5fe6f287cc5eebff9461073/userdata/shm major:0 minor:482 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3a2d420757c83bca1045a3ec4516092ddbbd8abbf4f20e54b4c522c5d6328b82/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3a2d420757c83bca1045a3ec4516092ddbbd8abbf4f20e54b4c522c5d6328b82/userdata/shm major:0 minor:845 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3d07b83dc456c1a725cd00216a0076881595c484156f383050f864fdf8f89296/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3d07b83dc456c1a725cd00216a0076881595c484156f383050f864fdf8f89296/userdata/shm major:0 minor:1060 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/404ed298090e42206d6c1bf4817333cb8ad6772bcba069fb38caf746806a1e14/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/404ed298090e42206d6c1bf4817333cb8ad6772bcba069fb38caf746806a1e14/userdata/shm major:0 minor:1212 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a/userdata/shm major:0 minor:280 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/43ecd54108687a6a19ab0e0e7609a070fe6d95b30fac709f11974346b31eb83b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/43ecd54108687a6a19ab0e0e7609a070fe6d95b30fac709f11974346b31eb83b/userdata/shm major:0 minor:658 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/482e25d3c964b6d3e2b3936d268fb90a41f7791876fc0fe26d190a21ad959690/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/482e25d3c964b6d3e2b3936d268fb90a41f7791876fc0fe26d190a21ad959690/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4df5f2d226a98cd9443f9e29da033c2146ea5a128236486d62e724363fd7a50e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4df5f2d226a98cd9443f9e29da033c2146ea5a128236486d62e724363fd7a50e/userdata/shm major:0 minor:106 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/541d291f058dd5b70e07bddf8e5c1d943bc102cd629ce0c0ca5aee055819cc00/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/541d291f058dd5b70e07bddf8e5c1d943bc102cd629ce0c0ca5aee055819cc00/userdata/shm major:0 minor:835 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584/userdata/shm major:0 minor:436 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6193af77058187e06675d2dcef9a4d240856c04b59dbfbf3639238a188d008e4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6193af77058187e06675d2dcef9a4d240856c04b59dbfbf3639238a188d008e4/userdata/shm major:0 minor:832 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6244c01d47d261c3397fd1d23da4ef09fefd7e2aec48680428b0aeff62e0f579/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6244c01d47d261c3397fd1d23da4ef09fefd7e2aec48680428b0aeff62e0f579/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/652aff135f7d81e9986b86e7980c1074aab42baa8a4fc667f78c2a4b153be766/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/652aff135f7d81e9986b86e7980c1074aab42baa8a4fc667f78c2a4b153be766/userdata/shm major:0 minor:800 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/655e1b023cf7c57ce36ad89fb7b5ee982e50f51224c428832341448b6acdee46/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/655e1b023cf7c57ce36ad89fb7b5ee982e50f51224c428832341448b6acdee46/userdata/shm major:0 minor:600 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/683ce0dfc8d9412d9d124855df897567cd87cd72bf0e18113725c86bfc97ad40/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/683ce0dfc8d9412d9d124855df897567cd87cd72bf0e18113725c86bfc97ad40/userdata/shm major:0 minor:326 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a5cd0e8536fcc54350ba490f0eb9ca59486f86834d7ae3d682b2a13eefc4e56/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a5cd0e8536fcc54350ba490f0eb9ca59486f86834d7ae3d682b2a13eefc4e56/userdata/shm major:0 minor:504 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6dc6e354bc34576e2c51f7938beb3db18ad7bf25caa74761e84175165545f5f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6dc6e354bc34576e2c51f7938beb3db18ad7bf25caa74761e84175165545f5f8/userdata/shm major:0 minor:815 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/752f9e4ea839752f549240b10d1d3e2131f24a8cc548f81bd7a8c88ec615bb72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/752f9e4ea839752f549240b10d1d3e2131f24a8cc548f81bd7a8c88ec615bb72/userdata/shm major:0 minor:1217 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/78a724cb7e61bebf9049146706c4cceb43b4093f3bcdd9805faeca5b8c0a66f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/78a724cb7e61bebf9049146706c4cceb43b4093f3bcdd9805faeca5b8c0a66f6/userdata/shm major:0 minor:842 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7cc4f51accb13c4562d715ad8c6bcb57f3016abaa8450769a7e898e5187b65a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7cc4f51accb13c4562d715ad8c6bcb57f3016abaa8450769a7e898e5187b65a3/userdata/shm major:0 minor:903 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/860f93b7f1bef63565efa90fb0877c6e364d6648096b1b89b73c03207fe0536b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/860f93b7f1bef63565efa90fb0877c6e364d6648096b1b89b73c03207fe0536b/userdata/shm major:0 minor:1013 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/86bfbedca58264a38e839a587d5e58f4d6fbf5d20a12071a7c803f4a3f76ad13/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86bfbedca58264a38e839a587d5e58f4d6fbf5d20a12071a7c803f4a3f76ad13/userdata/shm major:0 minor:288 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/968a9822c87f73e9559c28309a177baff6729af2cf700098ba1888ec0387b7bc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/968a9822c87f73e9559c28309a177baff6729af2cf700098ba1888ec0387b7bc/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9864787648d4e15093640c185e770cdfc44c9c159ce5adfbe7392aee39b016ba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9864787648d4e15093640c185e770cdfc44c9c159ce5adfbe7392aee39b016ba/userdata/shm major:0 minor:126 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a16ebd5549e68460cc5eab019554f78bf08c0501964eec5eb6763ec49c8e6ef3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a16ebd5549e68460cc5eab019554f78bf08c0501964eec5eb6763ec49c8e6ef3/userdata/shm major:0 minor:352 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a6ba7ab38272b1980170e10ad4e2da3b1c3208a4fc9cba6639e6e32d852d5560/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a6ba7ab38272b1980170e10ad4e2da3b1c3208a4fc9cba6639e6e32d852d5560/userdata/shm major:0 minor:825 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a8df5908ff558e7c538aba1ffb0d5c449e7824bb42a6bb700748a71cb6ece532/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8df5908ff558e7c538aba1ffb0d5c449e7824bb42a6bb700748a71cb6ece532/userdata/shm major:0 minor:507 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9b39c2081f67776044f305ac72592df90a62e4fce161411cf7d76ff26a6efb9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9b39c2081f67776044f305ac72592df90a62e4fce161411cf7d76ff26a6efb9/userdata/shm major:0 minor:615 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aacef958695b1652452330209b40a5322d7de81c0ce86e84b51d42da90b8a1df/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aacef958695b1652452330209b40a5322d7de81c0ce86e84b51d42da90b8a1df/userdata/shm major:0 minor:1271 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b2ea613937f7a76a28d6f697b6791b043bc923f7e0e6135b9a4e3874c4b94ea7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b2ea613937f7a76a28d6f697b6791b043bc923f7e0e6135b9a4e3874c4b94ea7/userdata/shm major:0 minor:817 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b907614a9591efb37b88a7686e4a790de265f0304e777404050b8a95d8f70969/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b907614a9591efb37b88a7686e4a790de265f0304e777404050b8a95d8f70969/userdata/shm major:0 minor:512 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b933426682f905b163cdeceb81784d840d9932bd08aab494209ff2aa752893c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b933426682f905b163cdeceb81784d840d9932bd08aab494209ff2aa752893c3/userdata/shm major:0 minor:625 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9/userdata/shm major:0 minor:163 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cabaebba9338acf6d962ed84ef7c4c178c189927ee3320bb1144f49c679ae574/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cabaebba9338acf6d962ed84ef7c4c178c189927ee3320bb1144f49c679ae574/userdata/shm major:0 minor:354 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cad0c8926c7213aaa96592ae903f4500c7805a83b9f9d84dbe60a8d1bef3fe27/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cad0c8926c7213aaa96592ae903f4500c7805a83b9f9d84dbe60a8d1bef3fe27/userdata/shm major:0 minor:722 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ce27248bc3e9346c25a58fe23a84bf9588e2d35effcd9c895b600fb3cca69c80/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ce27248bc3e9346c25a58fe23a84bf9588e2d35effcd9c895b600fb3cca69c80/userdata/shm major:0 minor:1187 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cfa9c4cdf55305be5a011d885ede624b7c0239e78eaa9736ed4cc34f79b42e2f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfa9c4cdf55305be5a011d885ede624b7c0239e78eaa9736ed4cc34f79b42e2f/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d0a039f1b3c97c24fad7030cd466c101137e0b84c1b4d70fca972fbe2ee77402/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d0a039f1b3c97c24fad7030cd466c101137e0b84c1b4d70fca972fbe2ee77402/userdata/shm major:0 minor:508 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d2460f89a8ee17465ca8b123cba158a911b19401cc35323955dae6be552d4e5d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d2460f89a8ee17465ca8b123cba158a911b19401cc35323955dae6be552d4e5d/userdata/shm major:0 minor:1064 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6786dcf48d821a6321a52c765c39223e7ae469bc0400a1737f59d9fc5cdb110/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6786dcf48d821a6321a52c765c39223e7ae469bc0400a1737f59d9fc5cdb110/userdata/shm major:0 minor:447 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d87fae590598a273f246a10c0bceaf42bb07fba93878914b2833795c3815488b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d87fae590598a273f246a10c0bceaf42bb07fba93878914b2833795c3815488b/userdata/shm major:0 minor:377 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917/userdata/shm major:0 minor:124 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90/userdata/shm major:0 minor:154 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e7536105ed8ca3b11d6f435a7c98206e700714a5620adc113fdcbd58553c7a29/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e7536105ed8ca3b11d6f435a7c98206e700714a5620adc113fdcbd58553c7a29/userdata/shm major:0 minor:837 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e8887c7d6eee650b037c513d33ece3c0abae0325c7cfbd8aa521e15955d8540b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e8887c7d6eee650b037c513d33ece3c0abae0325c7cfbd8aa521e15955d8540b/userdata/shm major:0 minor:836 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f17488314313adf8e5d4ca3b5623c6439e87e4c15c926ef56dd3963870bb1fef/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f17488314313adf8e5d4ca3b5623c6439e87e4c15c926ef56dd3963870bb1fef/userdata/shm major:0 minor:505 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f2a89a16b5cdf52d6375f76682e1cfee8f9385caf2527d209e27f16f7df56fbc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2a89a16b5cdf52d6375f76682e1cfee8f9385caf2527d209e27f16f7df56fbc/userdata/shm major:0 minor:1022 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f575cb15d53ccde2ef110c34dc5bda0d2dd2200d5c840f4afa64c209dc8f16aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f575cb15d53ccde2ef110c34dc5bda0d2dd2200d5c840f4afa64c209dc8f16aa/userdata/shm major:0 minor:366 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f5a1d137318bed9fd566b8260909991bd75bf6152bc142fe74433e2215565edb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f5a1d137318bed9fd566b8260909991bd75bf6152bc142fe74433e2215565edb/userdata/shm major:0 minor:793 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f5aa73f30470446484267ee08c4016bd9826913f9a65531b7d70349b1291252e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f5aa73f30470446484267ee08c4016bd9826913f9a65531b7d70349b1291252e/userdata/shm major:0 minor:1170 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fc3bdf010ad07d49b80af021e765b99347775940dd1bba2296554fae89223428/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fc3bdf010ad07d49b80af021e765b99347775940dd1bba2296554fae89223428/userdata/shm major:0 minor:461 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fcbd0dfcd13ca5f8a8db77172cb144a3166d04c3140529e3b2606f791e557f0c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcbd0dfcd13ca5f8a8db77172cb144a3166d04c3140529e3b2606f791e557f0c/userdata/shm major:0 minor:503 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fcc6cec1e5cb2ad6735082c479ccfca43dd610036ae64420869156c1921dfe15/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcc6cec1e5cb2ad6735082c479ccfca43dd610036ae64420869156c1921dfe15/userdata/shm major:0 minor:840 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fce1914660e945c88d472e8a5d86bf17798d1db67260addab80c44f005293735/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fce1914660e945c88d472e8a5d86bf17798d1db67260addab80c44f005293735/userdata/shm major:0 minor:1065 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06ccd378-23ee-49b7-a435-4b01de772155/volumes/kubernetes.io~projected/kube-api-access-7cjfj:{mountpoint:/var/lib/kubelet/pods/06ccd378-23ee-49b7-a435-4b01de772155/volumes/kubernetes.io~projected/kube-api-access-7cjfj major:0 minor:1017 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/06ccd378-23ee-49b7-a435-4b01de772155/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/06ccd378-23ee-49b7-a435-4b01de772155/volumes/kubernetes.io~secret/proxy-tls major:0 minor:1015 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d134032-1c35-4b69-9336-bcdc9c1cb87d/volumes/kubernetes.io~projected/kube-api-access-wjkkc:{mountpoint:/var/lib/kubelet/pods/0d134032-1c35-4b69-9336-bcdc9c1cb87d/volumes/kubernetes.io~projected/kube-api-access-wjkkc major:0 minor:803 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d134032-1c35-4b69-9336-bcdc9c1cb87d/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/0d134032-1c35-4b69-9336-bcdc9c1cb87d/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:802 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~projected/kube-api-access-gt4vh:{mountpoint:/var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~projected/kube-api-access-gt4vh major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:497 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:495 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~projected/kube-api-access-8j6q5:{mountpoint:/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~projected/kube-api-access-8j6q5 major:0 minor:108 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~projected/kube-api-access-9d6s7:{mountpoint:/var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~projected/kube-api-access-9d6s7 major:0 minor:161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~secret/webhook-cert major:0 minor:162 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24d878bd-05cd-414e-94c1-a3e9ce637331/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/24d878bd-05cd-414e-94c1-a3e9ce637331/volumes/kubernetes.io~projected/kube-api-access major:0 minor:481 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24d878bd-05cd-414e-94c1-a3e9ce637331/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/24d878bd-05cd-414e-94c1-a3e9ce637331/volumes/kubernetes.io~secret/serving-cert major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~projected/kube-api-access-nwrjc:{mountpoint:/var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~projected/kube-api-access-nwrjc major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2acc6d35-5679-4fac-970f-3d2ff954cc33/volumes/kubernetes.io~projected/kube-api-access-kc6cl:{mountpoint:/var/lib/kubelet/pods/2acc6d35-5679-4fac-970f-3d2ff954cc33/volumes/kubernetes.io~projected/kube-api-access-kc6cl major:0 minor:608 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2acc6d35-5679-4fac-970f-3d2ff954cc33/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/2acc6d35-5679-4fac-970f-3d2ff954cc33/volumes/kubernetes.io~secret/metrics-tls major:0 minor:605 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~projected/kube-api-access-p2mhb:{mountpoint:/var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~projected/kube-api-access-p2mhb major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:496 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a5284f9-cbb7-400b-ab39-bfef60ec198b/volumes/kubernetes.io~projected/kube-api-access-j744d:{mountpoint:/var/lib/kubelet/pods/3a5284f9-cbb7-400b-ab39-bfef60ec198b/volumes/kubernetes.io~projected/kube-api-access-j744d major:0 minor:798 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~projected/kube-api-access major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~projected/kube-api-access-q78mm:{mountpoint:/var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~projected/kube-api-access-q78mm major:0 minor:1210 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1203 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1204 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~projected/kube-api-access major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47dedc5d-1288-4020-b481-5dca68a7d437/volumes/kubernetes.io~projected/kube-api-access-hhq2x:{mountpoint:/var/lib/kubelet/pods/47dedc5d-1288-4020-b481-5dca68a7d437/volumes/kubernetes.io~projected/kube-api-access-hhq2x major:0 minor:813 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47dedc5d-1288-4020-b481-5dca68a7d437/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/47dedc5d-1288-4020-b481-5dca68a7d437/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:808 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4b9d6485-cf67-49c5-99c1-b8582a0bab70/volumes/kubernetes.io~projected/kube-api-access-tgfqh:{mountpoint:/var/lib/kubelet/pods/4b9d6485-cf67-49c5-99c1-b8582a0bab70/volumes/kubernetes.io~projected/kube-api-access-tgfqh major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~secret/serving-cert major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54001c8e-cb57-47dc-8594-9daed4190bda/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/54001c8e-cb57-47dc-8594-9daed4190bda/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1052 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~projected/kube-api-access-f4mkf:{mountpoint:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~projected/kube-api-access-f4mkf major:0 minor:149 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:148 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/57803492-e1dd-4994-8330-1e9b393d54fd/volumes/kubernetes.io~projected/kube-api-access-vg2gm:{mountpoint:/var/lib/kubelet/pods/57803492-e1dd-4994-8330-1e9b393d54fd/volumes/kubernetes.io~projected/kube-api-access-vg2gm major:0 minor:902 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57803492-e1dd-4994-8330-1e9b393d54fd/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/57803492-e1dd-4994-8330-1e9b393d54fd/volumes/kubernetes.io~secret/proxy-tls major:0 minor:901 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5793184d-de96-49ad-a060-0fa0cf278a9c/volumes/kubernetes.io~projected/kube-api-access-v9dcr:{mountpoint:/var/lib/kubelet/pods/5793184d-de96-49ad-a060-0fa0cf278a9c/volumes/kubernetes.io~projected/kube-api-access-v9dcr major:0 minor:341 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b459832-b875-49a6-a7c3-253fa6c8e45a/volumes/kubernetes.io~projected/kube-api-access-wg9l8:{mountpoint:/var/lib/kubelet/pods/5b459832-b875-49a6-a7c3-253fa6c8e45a/volumes/kubernetes.io~projected/kube-api-access-wg9l8 major:0 minor:985 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b459832-b875-49a6-a7c3-253fa6c8e45a/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/5b459832-b875-49a6-a7c3-253fa6c8e45a/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:947 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~projected/kube-api-access-p7b4r:{mountpoint:/var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~projected/kube-api-access-p7b4r major:0 minor:814 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~secret/cert major:0 minor:805 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:807 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~projected/kube-api-access-l8wvx:{mountpoint:/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~projected/kube-api-access-l8wvx major:0 minor:1270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dc83a57-34c5-4c64-97d3-b6191ba690eb/volumes/kubernetes.io~projected/kube-api-access-b64s6:{mountpoint:/var/lib/kubelet/pods/6dc83a57-34c5-4c64-97d3-b6191ba690eb/volumes/kubernetes.io~projected/kube-api-access-b64s6 major:0 minor:616 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ff7868e-f0d3-4c63-901f-fed11d623cf1/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/6ff7868e-f0d3-4c63-901f-fed11d623cf1/volumes/kubernetes.io~projected/ca-certs major:0 minor:440 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6ff7868e-f0d3-4c63-901f-fed11d623cf1/volumes/kubernetes.io~projected/kube-api-access-r6xw4:{mountpoint:/var/lib/kubelet/pods/6ff7868e-f0d3-4c63-901f-fed11d623cf1/volumes/kubernetes.io~projected/kube-api-access-r6xw4 major:0 minor:445 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~projected/kube-api-access-wkxv7:{mountpoint:/var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~projected/kube-api-access-wkxv7 major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~secret/serving-cert major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~projected/kube-api-access-7l66s:{mountpoint:/var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~projected/kube-api-access-7l66s major:0 minor:1058 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/default-certificate major:0 minor:1056 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1048 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/stats-auth major:0 minor:1057 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~projected/kube-api-access-mfxjf:{mountpoint:/var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~projected/kube-api-access-mfxjf major:0 minor:1169 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:1167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~secret/webhook-cert major:0 minor:1168 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~projected/kube-api-access-cksnd:{mountpoint:/var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~projected/kube-api-access-cksnd major:0 minor:636 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/encryption-config major:0 minor:634 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/etcd-client major:0 minor:635 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/serving-cert major:0 minor:633 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~projected/kube-api-access-ght2z:{mountpoint:/var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~projected/kube-api-access-ght2z major:0 minor:833 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:826 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~secret/srv-cert major:0 minor:827 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6/volumes/kubernetes.io~projected/kube-api-access-znzzv:{mountpoint:/var/lib/kubelet/pods/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6/volumes/kubernetes.io~projected/kube-api-access-znzzv major:0 minor:797 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:790 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~projected/kube-api-access-wplcg:{mountpoint:/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~projected/kube-api-access-wplcg major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~secret/serving-cert major:0 minor:258 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:602 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~empty-dir/tmp major:0 minor:603 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~projected/kube-api-access-d6d4r:{mountpoint:/var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~projected/kube-api-access-d6d4r major:0 minor:604 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:286 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/kube-api-access-58xrl:{mountpoint:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/kube-api-access-58xrl major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:498 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~projected/kube-api-access-znjcw:{mountpoint:/var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~projected/kube-api-access-znjcw major:0 minor:831 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:828 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~secret/srv-cert major:0 minor:829 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a544f5a-06b6-4297-a845-d81e9ab9ece7/volumes/kubernetes.io~projected/kube-api-access-t5zks:{mountpoint:/var/lib/kubelet/pods/8a544f5a-06b6-4297-a845-d81e9ab9ece7/volumes/kubernetes.io~projected/kube-api-access-t5zks major:0 minor:344 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3/volumes/kubernetes.io~projected/kube-api-access-wfl9v:{mountpoint:/var/lib/kubelet/pods/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3/volumes/kubernetes.io~projected/kube-api-access-wfl9v major:0 minor:667 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/volumes/kubernetes.io~projected/kube-api-access-mhbhv:{mountpoint:/var/lib/kubelet/pods/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/volumes/kubernetes.io~projected/kube-api-access-mhbhv major:0 minor:794 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/volumes/kubernetes.io~secret/serving-cert major:0 minor:789 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~projected/kube-api-access-sl5r2:{mountpoint:/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~projected/kube-api-access-sl5r2 major:0 minor:533 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/encryption-config major:0 minor:500 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/etcd-client major:0 minor:488 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/serving-cert major:0 minor:502 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/kube-api-access-9qsvg:{mountpoint:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/kube-api-access-9qsvg major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:499 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/945907dd-f6b3-400f-b539-e1310eb11dd7/volumes/kubernetes.io~projected/kube-api-access-wbk8g:{mountpoint:/var/lib/kubelet/pods/945907dd-f6b3-400f-b539-e1310eb11dd7/volumes/kubernetes.io~projected/kube-api-access-wbk8g major:0 minor:896 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/945907dd-f6b3-400f-b539-e1310eb11dd7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/945907dd-f6b3-400f-b539-e1310eb11dd7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:875 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99f14e64-228f-4b9e-991f-ee398fe7bb8a/volumes/kubernetes.io~projected/kube-api-access-p6b4v:{mountpoint:/var/lib/kubelet/pods/99f14e64-228f-4b9e-991f-ee398fe7bb8a/volumes/kubernetes.io~projected/kube-api-access-p6b4v major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~projected/kube-api-access-pntn4:{mountpoint:/var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~projected/kube-api-access-pntn4 major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:501 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9e0e3072-a35c-4404-891c-f31fafd0b4b1/volumes/kubernetes.io~projected/kube-api-access-rmcjv:{mountpoint:/var/lib/kubelet/pods/9e0e3072-a35c-4404-891c-f31fafd0b4b1/volumes/kubernetes.io~projected/kube-api-access-rmcjv major:0 minor:792 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~projected/kube-api-access-22p85:{mountpoint:/var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~projected/kube-api-access-22p85 major:0 minor:1209 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a663ecaf-ced2-4c7d-91c8-44e94851f7d6/volumes/kubernetes.io~projected/kube-api-access-nn9mt:{mountpoint:/var/lib/kubelet/pods/a663ecaf-ced2-4c7d-91c8-44e94851f7d6/volumes/kubernetes.io~projected/kube-api-access-nn9mt major:0 minor:811 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a663ecaf-ced2-4c7d-91c8-44e94851f7d6/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/a663ecaf-ced2-4c7d-91c8-44e94851f7d6/volumes/kubernetes.io~secret/proxy-tls major:0 minor:809 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~projected/kube-api-access-n8c76:{mountpoint:/var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~projected/kube-api-access-n8c76 major:0 minor:1208 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1199 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1207 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/affc63b7-db45-429d-82ff-e50f6aae51dc/volumes/kubernetes.io~projected/kube-api-access-5z8xh:{mountpoint:/var/lib/kubelet/pods/affc63b7-db45-429d-82ff-e50f6aae51dc/volumes/kubernetes.io~projected/kube-api-access-5z8xh major:0 minor:795 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/affc63b7-db45-429d-82ff-e50f6aae51dc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/affc63b7-db45-429d-82ff-e50f6aae51dc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:791 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~projected/kube-api-access-zx8dp:{mountpoint:/var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~projected/kube-api-access-zx8dp major:0 minor:1178 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1177 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1182 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b12352eb-04d7-4419-b1bf-d08bca9da599/volumes/kubernetes.io~projected/kube-api-access-cpnzd:{mountpoint:/var/lib/kubelet/pods/b12352eb-04d7-4419-b1bf-d08bca9da599/volumes/kubernetes.io~projected/kube-api-access-cpnzd major:0 minor:1059 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8bdbf92-61e3-41e9-a48d-4259cee80e9f/volumes/kubernetes.io~projected/kube-api-access-t9lvg:{mountpoint:/var/lib/kubelet/pods/b8bdbf92-61e3-41e9-a48d-4259cee80e9f/volumes/kubernetes.io~projected/kube-api-access-t9lvg 
major:0 minor:282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~projected/kube-api-access-zcqzj:{mountpoint:/var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~projected/kube-api-access-zcqzj major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~projected/kube-api-access-llgnr:{mountpoint:/var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~projected/kube-api-access-llgnr major:0 minor:1080 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~secret/certs major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1074 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf57b864-25d7-4420-9052-04dd580a9f7d/volumes/kubernetes.io~projected/kube-api-access-bdbct:{mountpoint:/var/lib/kubelet/pods/bf57b864-25d7-4420-9052-04dd580a9f7d/volumes/kubernetes.io~projected/kube-api-access-bdbct major:0 minor:810 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf57b864-25d7-4420-9052-04dd580a9f7d/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/bf57b864-25d7-4420-9052-04dd580a9f7d/volumes/kubernetes.io~secret/cert major:0 minor:806 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ce55de54-8441-4a16-8b57-598042869000/volumes/kubernetes.io~projected/kube-api-access-6sh26:{mountpoint:/var/lib/kubelet/pods/ce55de54-8441-4a16-8b57-598042869000/volumes/kubernetes.io~projected/kube-api-access-6sh26 major:0 minor:812 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce55de54-8441-4a16-8b57-598042869000/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ce55de54-8441-4a16-8b57-598042869000/volumes/kubernetes.io~secret/serving-cert major:0 minor:804 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d48d286d-4f37-4027-86cd-1580e6076613/volumes/kubernetes.io~projected/kube-api-access-fzdfs:{mountpoint:/var/lib/kubelet/pods/d48d286d-4f37-4027-86cd-1580e6076613/volumes/kubernetes.io~projected/kube-api-access-fzdfs major:0 minor:102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~projected/kube-api-access-9z9jc:{mountpoint:/var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~projected/kube-api-access-9z9jc major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e/volumes/kubernetes.io~projected/kube-api-access-ndf8h:{mountpoint:/var/lib/kubelet/pods/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e/volumes/kubernetes.io~projected/kube-api-access-ndf8h major:0 minor:1308 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1304 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d7c61886-6cc7-44aa-b56a-81cdcc670993/volumes/kubernetes.io~projected/kube-api-access-mq2rn:{mountpoint:/var/lib/kubelet/pods/d7c61886-6cc7-44aa-b56a-81cdcc670993/volumes/kubernetes.io~projected/kube-api-access-mq2rn major:0 minor:588 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7c61886-6cc7-44aa-b56a-81cdcc670993/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d7c61886-6cc7-44aa-b56a-81cdcc670993/volumes/kubernetes.io~secret/serving-cert major:0 minor:564 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~projected/kube-api-access-d7sfw:{mountpoint:/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~projected/kube-api-access-d7sfw major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~projected/kube-api-access-qfqmb:{mountpoint:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~projected/kube-api-access-qfqmb major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/etcd-client major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e5802841-52dc-4d15-a252-0eac70e9fbbc/volumes/kubernetes.io~projected/kube-api-access-nvg7b:{mountpoint:/var/lib/kubelet/pods/e5802841-52dc-4d15-a252-0eac70e9fbbc/volumes/kubernetes.io~projected/kube-api-access-nvg7b major:0 minor:353 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5802841-52dc-4d15-a252-0eac70e9fbbc/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/e5802841-52dc-4d15-a252-0eac70e9fbbc/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:758 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~projected/kube-api-access-2fsdx:{mountpoint:/var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~projected/kube-api-access-2fsdx major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:58 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~projected/kube-api-access-nnmqj:{mountpoint:/var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~projected/kube-api-access-nnmqj major:0 minor:130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~secret/metrics-certs major:0 minor:720 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e96ce488-0099-43de-9933-425b7c981055/volumes/kubernetes.io~projected/kube-api-access-7xp47:{mountpoint:/var/lib/kubelet/pods/e96ce488-0099-43de-9933-425b7c981055/volumes/kubernetes.io~projected/kube-api-access-7xp47 major:0 minor:1012 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/f2c50f9a-8c73-4cb9-9cbf-2565496212a6/volumes/kubernetes.io~projected/kube-api-access-4b825:{mountpoint:/var/lib/kubelet/pods/f2c50f9a-8c73-4cb9-9cbf-2565496212a6/volumes/kubernetes.io~projected/kube-api-access-4b825 major:0 minor:419 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2c50f9a-8c73-4cb9-9cbf-2565496212a6/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/f2c50f9a-8c73-4cb9-9cbf-2565496212a6/volumes/kubernetes.io~secret/signing-key major:0 minor:416 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~projected/kube-api-access-5wr82:{mountpoint:/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~projected/kube-api-access-5wr82 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~secret/serving-cert major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~projected/kube-api-access-m6mk9:{mountpoint:/var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~projected/kube-api-access-m6mk9 major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~secret/metrics-tls major:0 minor:494 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f47fa225-93fd-458b-b450-a0411e629afd/volumes/kubernetes.io~projected/kube-api-access-4mkd2:{mountpoint:/var/lib/kubelet/pods/f47fa225-93fd-458b-b450-a0411e629afd/volumes/kubernetes.io~projected/kube-api-access-4mkd2 major:0 minor:596 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f47fa225-93fd-458b-b450-a0411e629afd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f47fa225-93fd-458b-b450-a0411e629afd/volumes/kubernetes.io~secret/serving-cert major:0 minor:476 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f81886b9-fcd3-4666-b550-0688072210f7/volumes/kubernetes.io~projected/kube-api-access-tmrjc:{mountpoint:/var/lib/kubelet/pods/f81886b9-fcd3-4666-b550-0688072210f7/volumes/kubernetes.io~projected/kube-api-access-tmrjc major:0 minor:321 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~projected/ca-certs major:0 minor:446 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~projected/kube-api-access-jftvv:{mountpoint:/var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~projected/kube-api-access-jftvv major:0 minor:449 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:460 fsType:tmpfs blockSize:0} overlay_0-1000:{mountpoint:/var/lib/containers/storage/overlay/0c6aca08751930b6ff3d9fa95b6ddc1a23d6d24e8d99834354afc2e498d85b65/merged major:0 minor:1000 fsType:overlay blockSize:0} overlay_0-1002:{mountpoint:/var/lib/containers/storage/overlay/138dcf4eb63e0a6552d81450da483e724029f9fbe705e41d9bbd9bc636c6ec97/merged major:0 minor:1002 fsType:overlay blockSize:0} overlay_0-1007:{mountpoint:/var/lib/containers/storage/overlay/cd7a819a10332993cfd2acb870ac9133a58d44566ffe195189c6152405e40bec/merged major:0 minor:1007 fsType:overlay blockSize:0} 
overlay_0-1010:{mountpoint:/var/lib/containers/storage/overlay/f34ec920eb96078bd7ed241b20be5d953c45f173a610dace11b17c7e76c6d2d2/merged major:0 minor:1010 fsType:overlay blockSize:0} overlay_0-1016:{mountpoint:/var/lib/containers/storage/overlay/c974be75074fde6214cf6251fd312f5d4978305be262aae153f937bde2a82bf4/merged major:0 minor:1016 fsType:overlay blockSize:0} overlay_0-1024:{mountpoint:/var/lib/containers/storage/overlay/3e5f68f6409870d8330722aad7a29f069c08864cbeb51d65485c4d6b3a2c61f5/merged major:0 minor:1024 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/2fecdadf6a4e18e23d9a51024af916c7f3cf36e790a41f8a8be25352780c9d46/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-1028:{mountpoint:/var/lib/containers/storage/overlay/d00c14d2451f65677be669686fef8d3e63b1c57120c708b7e11066223c0e6b13/merged major:0 minor:1028 fsType:overlay blockSize:0} overlay_0-1029:{mountpoint:/var/lib/containers/storage/overlay/59a8fff205f336e7b9ef31c81eceecdc35672351b006e8910ed04367fb90dcf7/merged major:0 minor:1029 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/98fa7842d162bde2bf7f97027553b8ec501c618ebc7ff4736261712ff483681e/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1035:{mountpoint:/var/lib/containers/storage/overlay/5c1c1cd61a3293b92578aefeeb0edbd7d6172f4228044040fb41e43a8730dadc/merged major:0 minor:1035 fsType:overlay blockSize:0} overlay_0-1039:{mountpoint:/var/lib/containers/storage/overlay/d61e4b2f91fcbbee4905dc1e68764982367f3b80f8bcf8ea2987be932fbce38e/merged major:0 minor:1039 fsType:overlay blockSize:0} overlay_0-1041:{mountpoint:/var/lib/containers/storage/overlay/770330c566e298797ebe94a233edc71004cb1c966fc82597cfab6c22bb62ac26/merged major:0 minor:1041 fsType:overlay blockSize:0} overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/b966e925d5c71ac0ab9e92fe2614895e60cdfdef0209b7e757528c07a2862f61/merged major:0 minor:1062 fsType:overlay blockSize:0} 
overlay_0-1068:{mountpoint:/var/lib/containers/storage/overlay/304a9428a67ddc68148080c62e08e46b023d4ba2208cfb70fc9037c3cc2a7226/merged major:0 minor:1068 fsType:overlay blockSize:0} overlay_0-1070:{mountpoint:/var/lib/containers/storage/overlay/fb5431a49ebea21afd87021f35faf8cb645fc3ffaf56a32a97799c866b53c8c3/merged major:0 minor:1070 fsType:overlay blockSize:0} overlay_0-1072:{mountpoint:/var/lib/containers/storage/overlay/d04a44170fc7ab5d34ba3048a1479046c3d020c8f7b9825d09bfd7e282f41631/merged major:0 minor:1072 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/4a254c5a8bc856e2618b7c4a405140eb4ea1acbb77291e87175c34f05bd44fee/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/9bd0f1ea3b97f81e084bb7382509a6da6b211ae96623a4e6e8cbef77470b060c/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1096:{mountpoint:/var/lib/containers/storage/overlay/03045cc8d300168aabaa28f88c8a0677c768e23deaa732b6d7bb8264ed1ee44c/merged major:0 minor:1096 fsType:overlay blockSize:0} overlay_0-1098:{mountpoint:/var/lib/containers/storage/overlay/9a887389d1836400cb7d6dc543a43f2cf813d4173a7c93dbcdac829beb19cac6/merged major:0 minor:1098 fsType:overlay blockSize:0} overlay_0-1101:{mountpoint:/var/lib/containers/storage/overlay/4608b14a2a0d95d0e7e50918e405fa6b7b84e0fa924301314bfd81c5a1874ef5/merged major:0 minor:1101 fsType:overlay blockSize:0} overlay_0-1103:{mountpoint:/var/lib/containers/storage/overlay/659185d9622dd72077c3d50829323feed5b9637b56c7bc156cecc5a01988d0a7/merged major:0 minor:1103 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/d65fbd6235ee7398bd9bd80325890e6d9db9042867ebd34eb1995bded03f724a/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-1139:{mountpoint:/var/lib/containers/storage/overlay/127de730bb15528cfc50926134588d1dfeec0c281b7966ba34b81c201f070ba9/merged major:0 minor:1139 fsType:overlay blockSize:0} 
overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/100044aa2d6e8e95aa40d15bafe14fe3fa98658b200942150f904d1b413dbef4/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-1157:{mountpoint:/var/lib/containers/storage/overlay/ae9095c41bec1fc23869496dedaa5719616e471b6a813ecb568744b22381c2e5/merged major:0 minor:1157 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/898d34fb874fb3a9b0718a0e075fb3f716dfa3f72b2c2e23c733e0981c25e406/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1165:{mountpoint:/var/lib/containers/storage/overlay/d6eb071b1f8c7581731856967a75482f33d9eb705bac49c2f5cceeed20ef5153/merged major:0 minor:1165 fsType:overlay blockSize:0} overlay_0-1172:{mountpoint:/var/lib/containers/storage/overlay/5f257ff7ca54651a3ee002ed326528a4a3b03a26642554d3aaf2a58492402c6e/merged major:0 minor:1172 fsType:overlay blockSize:0} overlay_0-1176:{mountpoint:/var/lib/containers/storage/overlay/b7aad18882b875558931ea4359caac9d2b6f3115e6204dcc35b65f3cc0ec7dfd/merged major:0 minor:1176 fsType:overlay blockSize:0} overlay_0-1180:{mountpoint:/var/lib/containers/storage/overlay/0973c1b647a2feb54401ed8380a9a953b27504c811acf1ec18568bde9bff2347/merged major:0 minor:1180 fsType:overlay blockSize:0} overlay_0-1189:{mountpoint:/var/lib/containers/storage/overlay/a278a92915624b4827448c44c9b176bad7d8a961c7c96934dffe3962770d8619/merged major:0 minor:1189 fsType:overlay blockSize:0} overlay_0-1191:{mountpoint:/var/lib/containers/storage/overlay/dd84327b11a376d8b1ac9066339a188073fc73b3dc7b49c6435b86a7121f1872/merged major:0 minor:1191 fsType:overlay blockSize:0} overlay_0-1193:{mountpoint:/var/lib/containers/storage/overlay/e85f4400eedbd6c553d1cd3b85b0f00c0d895c2ee19693d7e69c17ef27ba5ee6/merged major:0 minor:1193 fsType:overlay blockSize:0} overlay_0-1215:{mountpoint:/var/lib/containers/storage/overlay/b3397d1de4972de004edacb7677da35c8d69c091ad3ab72e760496bc03e4ba2a/merged major:0 minor:1215 fsType:overlay blockSize:0} 
overlay_0-1219:{mountpoint:/var/lib/containers/storage/overlay/a5154dd0cf1c078e23c836501a87c6c60e31b48490904d1c8e6cd47fe225f72d/merged major:0 minor:1219 fsType:overlay blockSize:0} overlay_0-122:{mountpoint:/var/lib/containers/storage/overlay/5411f4cad8b0aa785e9660f69a66a74abd4d7529d29a3824452ab812f6ca2cb5/merged major:0 minor:122 fsType:overlay blockSize:0} overlay_0-1221:{mountpoint:/var/lib/containers/storage/overlay/83f1ec388ca7fe146250527788d19d9257241868a404268e3bebeadf73336bd5/merged major:0 minor:1221 fsType:overlay blockSize:0} overlay_0-1223:{mountpoint:/var/lib/containers/storage/overlay/411be6c792154a04bb53aa0a786ca9390c417e4c548efecd2c3601c40f140038/merged major:0 minor:1223 fsType:overlay blockSize:0} overlay_0-1224:{mountpoint:/var/lib/containers/storage/overlay/1fff978b43ba3e6ab504937141804e69ef9acd7162ab4e2fb3d557a999981ef1/merged major:0 minor:1224 fsType:overlay blockSize:0} overlay_0-1231:{mountpoint:/var/lib/containers/storage/overlay/d5517e1276d3bd8ad8c20e471b67a6e9e43d3956d8024646c0ebcf79b2a9173d/merged major:0 minor:1231 fsType:overlay blockSize:0} overlay_0-1236:{mountpoint:/var/lib/containers/storage/overlay/f754c66813a19785f1e943f636ef57b494ce1219e0143774c21f95841f2fc2e8/merged major:0 minor:1236 fsType:overlay blockSize:0} overlay_0-1238:{mountpoint:/var/lib/containers/storage/overlay/5da7d55a9b9fdf724a466e3c594124e075d1219d291a7cd91216f4f2d4a8c3be/merged major:0 minor:1238 fsType:overlay blockSize:0} overlay_0-1240:{mountpoint:/var/lib/containers/storage/overlay/c645484f8150c406ca5bcd66eed58543ccb9a2dd530896a96900330c25c445f2/merged major:0 minor:1240 fsType:overlay blockSize:0} overlay_0-1242:{mountpoint:/var/lib/containers/storage/overlay/ef62ee9c80c011f1568251b3ed00db1554fa480ee9600016a50f27b54cb0e924/merged major:0 minor:1242 fsType:overlay blockSize:0} overlay_0-1245:{mountpoint:/var/lib/containers/storage/overlay/2f463259a1c38b7d863c995a7a809f43642031baef92f56a6f726a8c3249a71c/merged major:0 minor:1245 fsType:overlay blockSize:0} 
overlay_0-1256:{mountpoint:/var/lib/containers/storage/overlay/047d8d9961f2ba8c09d204d68a24bbb85b03a530e509e593c106af2c22decf2b/merged major:0 minor:1256 fsType:overlay blockSize:0} overlay_0-1273:{mountpoint:/var/lib/containers/storage/overlay/74157eb711d96e3a0c67512f7c09d0e9852ea6852031d5e2d28199bf067bf592/merged major:0 minor:1273 fsType:overlay blockSize:0} overlay_0-1275:{mountpoint:/var/lib/containers/storage/overlay/579af67d9fa13325086bca7e292f84d0d5e40fef598193f2f924179bf713e3c4/merged major:0 minor:1275 fsType:overlay blockSize:0} overlay_0-128:{mountpoint:/var/lib/containers/storage/overlay/a9f4a5255a3259800c181eff442f6acd994a76e3aa4f615947d7459b16378b65/merged major:0 minor:128 fsType:overlay blockSize:0} overlay_0-1292:{mountpoint:/var/lib/containers/storage/overlay/78d8105245f31336ba19ac1eb21f985d11bdc1f932222e5070f513cd55669115/merged major:0 minor:1292 fsType:overlay blockSize:0} overlay_0-1300:{mountpoint:/var/lib/containers/storage/overlay/0b21c46a9002b094d64d63ba5552535d295dab4f93f6e65d52ebdfab49434108/merged major:0 minor:1300 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/c7656afe1b20e18d5288902d38f839dd909a524e24203d2ee54de707707cb2a6/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-1312:{mountpoint:/var/lib/containers/storage/overlay/7b17dff663de2b990f1ef0e5e387ec892fd9f959a2a7284d963dc519cd76f2fc/merged major:0 minor:1312 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/3a4ad25d55be29f49775856000e9b90bd9f86cd98f412abde72f1f5e4fe0699d/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-1330:{mountpoint:/var/lib/containers/storage/overlay/551f95e06c8453473c861ba43a41586a042001af002d0bd090bc726bba2e81d4/merged major:0 minor:1330 fsType:overlay blockSize:0} overlay_0-135:{mountpoint:/var/lib/containers/storage/overlay/ac424a0c3a7b645ac4cf26f4a7240202ca648fb839afda7312f214efe16fd9f8/merged major:0 minor:135 fsType:overlay blockSize:0} 
overlay_0-141:{mountpoint:/var/lib/containers/storage/overlay/f83964a7a0817a72dc36fa7428976bb66d625237e0dcde547b1da37e237c5852/merged major:0 minor:141 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/e831bd5837c26433c63cec3d865618cc90ab47de6c44e0873a8c1e49f6413930/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/fcbab3a1ad1870856784c1474c4be9f1cef47919d42e722e3252be750ee2245d/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/284fde59459793658254765a86793f8df3abacd1e864699d3fe7577b97f24eb6/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/53bbb5487bb7f2fd5e2d92844533bd1ba0d4c564e1408e8530368151157c7ebe/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/15bb74efab882f47c2a833c93377ed88cb22cf15d99092be7a07123d752aa043/merged major:0 minor:165 fsType:overlay blockSize:0} overlay_0-167:{mountpoint:/var/lib/containers/storage/overlay/7e7023da0419b1d3fbf63a8350d877d5a2f471b88c6761ed94d871e308e469b0/merged major:0 minor:167 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/3ce9f9af3400d5278700758e11fc9188b7d0b7fd961f6e076db9cfb05842b61f/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-173:{mountpoint:/var/lib/containers/storage/overlay/210d558c4553ef4faf677cb96e0bce65400cb7970c3fabc2d8b4199af61c1106/merged major:0 minor:173 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/bd6d6b7a95d790c62307f4417f238e6d748c4dae49ec135d49414a0aec7d5a27/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-175:{mountpoint:/var/lib/containers/storage/overlay/bd4ed6a73b982df2f1cd7ac9fa2e3106f50f0f1b612a634cd8e428bf9307649f/merged major:0 minor:175 fsType:overlay blockSize:0} 
overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/46fbafc27cda37c1d6121fc5070b8497ca0d561c0ee3544945404e692bd2babf/merged major:0 minor:177 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/1dc296e501556ba72c419e2e967afe734057055c37bfbd667c068e397618d8f8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-181:{mountpoint:/var/lib/containers/storage/overlay/166286a56608598546dd892fdcfd6c90d941d0a1a26bf3fd23476af155393f26/merged major:0 minor:181 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/48c8d8e66adaf1f7bdcf9d698d5bb4d11a05e9403601d708e9d60e33fede3f7a/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/d79b53de133a90950d7696d433543d6f447bd71d307d203bb80b6ea91bb253ec/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/e7e7c74684e0a0f169663dcc21309bff70d71d0847a24a822fbc3d5424405c80/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/635c2d42f5f022497c3a3b72b39398ee3ebc58c504dc9a054610928679bd91de/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-202:{mountpoint:/var/lib/containers/storage/overlay/787019c6ce4792917114ed08730652ba0a3d59f2921aaf4905890676e51139e6/merged major:0 minor:202 fsType:overlay blockSize:0} overlay_0-203:{mountpoint:/var/lib/containers/storage/overlay/ce411febe08e408b7c2cc8f963cf4922baab54d2b4d66b51d116999fe3ad4f7e/merged major:0 minor:203 fsType:overlay blockSize:0} overlay_0-212:{mountpoint:/var/lib/containers/storage/overlay/dff4d90bb71782654696862704a0ee6d348fddb035957b9fe5aec86fd4abf6de/merged major:0 minor:212 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/908cebb2def07eed91c0bf4a726ccdea02adc6d09b8504b1550afb9c3ef97f4d/merged major:0 minor:220 fsType:overlay blockSize:0} 
overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/4bc14fb5e5910fe47c4cb7aea0d8da81175ec081040a95db9bed3f176d4c3503/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/29a25f0cb6160f73a51fe4c4ccc44c13d78d2569e7a0da3eee462b1dbdd535ce/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/a6ee5868db62bee232d8d0bec021c7b2830102df821f76d79603e3053f83ae84/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/9c95e6d6a6052a306855957a860f5323e463a0ff027618961abc8a873731ad7e/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/2d7f02e85ce5200f29ce23e0e7d3e81d1477b98eca971f7d9b7215578afa6756/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/9c897047166fb93121aa760ca43ac54762bffebbad52c5e30749a3434f47a18f/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/64514c23c1c96d40494750021b1c98a164bfda23e7aa698d91257a7fb4936009/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/085e9a5ba152eedc0c0b15809f81a44ac99acc2091b58b7d884c80b145af536c/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/1bd158c627df8ffcf9db0eb116ef9dee2ea0f0f87d877cf2961cf2721dea4f9c/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/da744e864a0a961ce6a1e0fae17ebf24ed7fef629cf40f55b8e990f2e677d441/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/2281c9447135ac7d9d49cfcb6a02b61d986af3a41e676148b3325ede9027e8ad/merged major:0 minor:309 fsType:overlay blockSize:0} 
overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/6b3a831e0eb23f9df1dfdbb56e9e8fb817d1ec3f88770d10f6c96c11915c92c0/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/49b57343b2ad4021a48c3f5edd3b9687654eedb46d7a27f5f0fcda1d48500531/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/84c934eacf9a93ddcdf0b8db55fb9c9c5befd0222bc6688df6ed152073341323/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/57c829ea56051513ee51c8c20e7501148bc17c7448fb60b382fe601a98b1a6a0/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/3efad935cdfe4928d8e0d1fc59cb8a84606208015b91f3c4aec35b104bf4fa96/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/1dfa8888fbc2bab04cc6b0b182850f954ae7cd12c748c31aef48576e4f99d309/merged major:0 minor:328 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/ea3ee416c99ebdd8e8640439f045bbfb54a63262bf28b27137ae9047fd9040ce/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-333:{mountpoint:/var/lib/containers/storage/overlay/097a45a568e66530f280a1f190e8e321bc36f7ace2ce8401286a6a6b9bead8fc/merged major:0 minor:333 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/5cb181fb3334bea92c3b9262972192cdfa466cf9395c3d8d2611acb109455111/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/b52b86db10d95f4831135552f40135a10c18d5fa91e38eda199f24d4e38db9ac/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/95774ae00d42f5d24db0f639eeaab334290bc3edb7326f604bb67b248be8f420/merged major:0 minor:339 fsType:overlay blockSize:0} 
overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/d75055c7dfcfe8fc5e0093bdb7180ebe5ba578a826f5a35a7cff0be8b393bd92/merged major:0 minor:342 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/aaa5488dd0410b698ef3ba5c3688321bc450eacac820a1c2bbed26cca01ce9d7/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/c969ee44409914134f471ba66b2893034ec5acb11f33922616ef079562c73baf/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/f2bae69d231c9a1b1425b5f7b38fae43f470dd4ccde3cc7a5f9cc8e01b3fb33b/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/5eee30fbe46b096f0144baaca6fa2f28ff98d7097365bb8ecad76c158217b95c/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-368:{mountpoint:/var/lib/containers/storage/overlay/f110d75d14037fe79127c94f3c6258667e88848a05b1b31eb3ffb791785fc91b/merged major:0 minor:368 fsType:overlay blockSize:0} overlay_0-370:{mountpoint:/var/lib/containers/storage/overlay/64905d4935115130a439fc9e1e4aa38c72098c2f5fbd471eb9ee2c6666810e21/merged major:0 minor:370 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/96b99bf2da199fac82166f4b181490bfca95ec9ead16ffdffbc737b9f7f5afbe/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-374:{mountpoint:/var/lib/containers/storage/overlay/35e9257e3f6426bcce786f1d794b25f49f2be899ea130b371181e7ba5d820d3d/merged major:0 minor:374 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/7a5fc46243f3db9bb4c5ac8322308efa59ca2c153f015ba8d02f115c4d4de7cb/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-379:{mountpoint:/var/lib/containers/storage/overlay/68de91a79158b23968a801b098db45e8a6dc78f1fe721c5b4cc68c89add1b870/merged major:0 minor:379 fsType:overlay blockSize:0} 
overlay_0-382:{mountpoint:/var/lib/containers/storage/overlay/7a893277bc82f5ff7a8114d6a56ab8fb00ad14f43ce479403f6cb157e79ca2a6/merged major:0 minor:382 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/9128fd09dea8c351a778228d5dbcc9d8738eaff771db39d0fb01c391e4d97b09/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-393:{mountpoint:/var/lib/containers/storage/overlay/c80e4e6f1f4c5c8ec98064b4efa1b5c01ffc53fbfbaaa29b0a08b7af9f82fcb5/merged major:0 minor:393 fsType:overlay blockSize:0} overlay_0-395:{mountpoint:/var/lib/containers/storage/overlay/52f0015de3455a867cb77ffea93ab5032f19ebf2bb1a25fbc0eac1f9d034a236/merged major:0 minor:395 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/4afd2e538ffd1bb6f812047d0b753d46e92c97eeb8822dbc94dcdeeb61717fc5/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-399:{mountpoint:/var/lib/containers/storage/overlay/a8650f8f1bda410b9cfedbdb9416f7b8d58e75bc2cbff1624b93236f313e667c/merged major:0 minor:399 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/e3decce6756091cdecd7baede5c3f45fe7a327b612a1fd1aa1ac70d0fa567ccf/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/09945b766ecb66062acc831c34dcbe6725eaf5f29ce426a2ea4ab87ddf83268b/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/ce5f5a53e4c3c306ee456a4d66717de5592265bc137f3df1948512c6631b538b/merged major:0 minor:409 fsType:overlay blockSize:0} overlay_0-410:{mountpoint:/var/lib/containers/storage/overlay/d3e42b23e73696c7dcd3d0db3a58724ac645d4f53e6b3c29a82e7a92da408cdb/merged major:0 minor:410 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/78a3e798723328d1fdd23833b87745aa64124a8dae1ef13de8fa5d966a9dfdd4/merged major:0 minor:414 fsType:overlay blockSize:0} 
overlay_0-417:{mountpoint:/var/lib/containers/storage/overlay/f7fae17a88c70740e86366bdced249474258993eaf868bc82a4888dbbcc44296/merged major:0 minor:417 fsType:overlay blockSize:0} overlay_0-422:{mountpoint:/var/lib/containers/storage/overlay/77d0492cf302c457615eb6df674cc29bb88192955e996644ce67c5ac0c2be5a1/merged major:0 minor:422 fsType:overlay blockSize:0} overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/2a80fc0d159f2037031917aafdf5b6e98b49ea287421c36ca0aaf7d6353f672a/merged major:0 minor:424 fsType:overlay blockSize:0} overlay_0-426:{mountpoint:/var/lib/containers/storage/overlay/083d36f747ee58cf17447a7cecc322e58641d1a906361de08167e931ee1e6dbb/merged major:0 minor:426 fsType:overlay blockSize:0} overlay_0-430:{mountpoint:/var/lib/containers/storage/overlay/6b56a32163ed43e2b05f42602a3cbd3a918e916d3aac75807d15c458b9b08746/merged major:0 minor:430 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/19b6ccb0469115bac8e7ad7db466d67c862afdc1149c3dacb84ebcc9500e95a8/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-433:{mountpoint:/var/lib/containers/storage/overlay/ddce18bc3bc7eca66ba20bd224296e194bb6c7bf618c1d0a44eede0ab2fc4e05/merged major:0 minor:433 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/41b7e06db1d0617e04b8436c2b40d5f11662c8e13d1c43ea1345b8b23e5309dd/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-438:{mountpoint:/var/lib/containers/storage/overlay/c331463565559a88032c608e08121c02f88988da4276a254ae617de4bd2f869a/merged major:0 minor:438 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/a9f1aa003dcb6acef9356ebcdfdc511a6e3b68c71fa363f810172c26e0f9a2ed/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-442:{mountpoint:/var/lib/containers/storage/overlay/c8a61b0b4ad8f3b64919435f4f2cafce97ad3ba57c8f8f9378e757e1e51e6181/merged major:0 minor:442 fsType:overlay blockSize:0} 
overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/de989f7a89f74bdba581e94117fbf66ac88ab7072480134140ab81f6061bb94f/merged major:0 minor:444 fsType:overlay blockSize:0} overlay_0-450:{mountpoint:/var/lib/containers/storage/overlay/d1cd3c527f810ed991704e9dcbfb896275acaab2c0b46ef6be4b0de69a27e411/merged major:0 minor:450 fsType:overlay blockSize:0} overlay_0-452:{mountpoint:/var/lib/containers/storage/overlay/61b5357351cfc0d4d0b58b97ca016c3d68d314ec610651b38046dcde60b56d95/merged major:0 minor:452 fsType:overlay blockSize:0} overlay_0-454:{mountpoint:/var/lib/containers/storage/overlay/9aabff7e3d11884af6aaf994785f8b7a736bc842ba66704877ecb3d0698102a0/merged major:0 minor:454 fsType:overlay blockSize:0} overlay_0-456:{mountpoint:/var/lib/containers/storage/overlay/5df7e0a9019133539d832cd42dbd286133e14072e3fbb77f54d7285ab61cc97b/merged major:0 minor:456 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/52c6cd86f8cd1cf6ad17367f53e9530a224f3f516908e750936a217741eabb61/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/8fc4bae833e4983b45d9ef8504a87f57496028ec3d9dd1e8ee3e8278dba04e08/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-468:{mountpoint:/var/lib/containers/storage/overlay/3a16bfc0287312da4c1b4ebc613f9aabd0a27c61060b504570389b663cdb7b2d/merged major:0 minor:468 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/92dcbe9ea801ab6af2efb3c37069770b32d908106510a3762dedc28aaa8210d5/merged major:0 minor:470 fsType:overlay blockSize:0} overlay_0-477:{mountpoint:/var/lib/containers/storage/overlay/c5d42c0da2960218592caaded2a6a7878503350c93a2ea9514859f0c685aa53b/merged major:0 minor:477 fsType:overlay blockSize:0} overlay_0-479:{mountpoint:/var/lib/containers/storage/overlay/e0eed0c2ab489061a1d6367e029fa0dc4ad1115e21a6c1cccf31c7b3bc4e8f00/merged major:0 minor:479 fsType:overlay blockSize:0} 
overlay_0-480:{mountpoint:/var/lib/containers/storage/overlay/24cc3d433effc140b12fbe9ba5975652477241fcdcc7c55866ca300f85bfbc66/merged major:0 minor:480 fsType:overlay blockSize:0} overlay_0-483:{mountpoint:/var/lib/containers/storage/overlay/5242393e716639f85057c855a5d81664c6d347a67888338cbd7cf611ab530007/merged major:0 minor:483 fsType:overlay blockSize:0} overlay_0-491:{mountpoint:/var/lib/containers/storage/overlay/958b30d7e7d422b6ef753fb59ac252d8095ad466fa5584278d9166fd68eb266f/merged major:0 minor:491 fsType:overlay blockSize:0} overlay_0-493:{mountpoint:/var/lib/containers/storage/overlay/8c529f678c5d9f3f2e60540cac07947658b89328e825a87e03c09f5fa61f8931/merged major:0 minor:493 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/2b34e5d7647b86defbbc5fbeba3a6bd3bad130b5f6569b5de309d1755e8dd71f/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-516:{mountpoint:/var/lib/containers/storage/overlay/2f0069138d726df1eed0cbca81e4c0e25fccd5c9de668b49469b08788c1f94be/merged major:0 minor:516 fsType:overlay blockSize:0} overlay_0-518:{mountpoint:/var/lib/containers/storage/overlay/85398e6544ddd48c578abee0a7ec245cdf67daadc6eab05661e47aabe4bd2a77/merged major:0 minor:518 fsType:overlay blockSize:0} overlay_0-519:{mountpoint:/var/lib/containers/storage/overlay/392453c665e8ab7b5e8ea45c0cb902c9e140f14d080446402b9ea0a847324952/merged major:0 minor:519 fsType:overlay blockSize:0} overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/8cc983fd5684efd526ba56b3f9cdc0d3ec7bc30fe17652295eca9bd1abfdf675/merged major:0 minor:521 fsType:overlay blockSize:0} overlay_0-523:{mountpoint:/var/lib/containers/storage/overlay/306f48db292f18e403319421306eda9529f35bd462a57d0ddeb930b64ed7341b/merged major:0 minor:523 fsType:overlay blockSize:0} overlay_0-525:{mountpoint:/var/lib/containers/storage/overlay/d54fdf087103b2fedaa40894054a7f87460e1fd367d42f8fbcef6036e4bec8a7/merged major:0 minor:525 fsType:overlay blockSize:0} 
overlay_0-527:{mountpoint:/var/lib/containers/storage/overlay/5e2e05e8e2dcd594d47ec2478bc294f95a715f2637cbf7ed193da5d251b2c940/merged major:0 minor:527 fsType:overlay blockSize:0} overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/3c11d3ecd088866e2061d84a0f8f4ae1da54d97f063f5400d9c04226d3aa8e92/merged major:0 minor:529 fsType:overlay blockSize:0} overlay_0-531:{mountpoint:/var/lib/containers/storage/overlay/8e9697f8f3b9a5dd322afd793292e72dd8b1d25fc8e0c335bf53ab320d20bc2a/merged major:0 minor:531 fsType:overlay blockSize:0} overlay_0-536:{mountpoint:/var/lib/containers/storage/overlay/94cccb29029223280c291e26786c506830e917455f97411f87a60863dffecbcb/merged major:0 minor:536 fsType:overlay blockSize:0} overlay_0-538:{mountpoint:/var/lib/containers/storage/overlay/6598c71dac5397f330189cebb0ca83f91266cd9e93fd72007d69bf1295899e70/merged major:0 minor:538 fsType:overlay blockSize:0} overlay_0-539:{mountpoint:/var/lib/containers/storage/overlay/3421a6f92fb9ac30ca5c4b98fd85248caea878912afff150fe33ce49b7102a14/merged major:0 minor:539 fsType:overlay blockSize:0} overlay_0-541:{mountpoint:/var/lib/containers/storage/overlay/6fe1d908d604ac13667a4f28cb93d0cca798ce3317d87f03fc348ce001b9ab7b/merged major:0 minor:541 fsType:overlay blockSize:0} overlay_0-542:{mountpoint:/var/lib/containers/storage/overlay/7a65e6817286d80bf49deadb0ac7e1291705d2b3a2c7d11eb471cd2dd9ac6feb/merged major:0 minor:542 fsType:overlay blockSize:0} overlay_0-543:{mountpoint:/var/lib/containers/storage/overlay/0ca3d1dee0f5c0a87bf34f0886c8198ddc02bb0eb2221e2cc9c02428481ed18a/merged major:0 minor:543 fsType:overlay blockSize:0} overlay_0-546:{mountpoint:/var/lib/containers/storage/overlay/35aa0a00955cc984f8dd5d7eff3205f55edfe29b6eeb460be491fc21591d3a09/merged major:0 minor:546 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/83491d3e265304b967fd46d6720aab461d4a2aa158e97b09b87392d1449e8e6e/merged major:0 minor:548 fsType:overlay blockSize:0} 
overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/c64dbbaadf6e5ef9b63714e7682d2b9b0253a3b93b49526daa7762255beb1cff/merged major:0 minor:554 fsType:overlay blockSize:0} overlay_0-558:{mountpoint:/var/lib/containers/storage/overlay/ca80dc470cb5f215be9f1987108882a9460a3b104eadd0531974fe62ee283fbb/merged major:0 minor:558 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/469bd2e22d27f9bdd696aafac580b16f04793d6fa13f2e6b14ba9d4dd599654c/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-560:{mountpoint:/var/lib/containers/storage/overlay/8e0dbdbc7e35db6353419c443c374062c55f65bb528573c6f12b3a8b89b6e219/merged major:0 minor:560 fsType:overlay blockSize:0} overlay_0-563:{mountpoint:/var/lib/containers/storage/overlay/ab9cb21fcea3842bbd873605dc3b2bf6a7eb3a6143b30b24840e2035bdb14c39/merged major:0 minor:563 fsType:overlay blockSize:0} overlay_0-569:{mountpoint:/var/lib/containers/storage/overlay/937abf2dbccb4107eb1874be76540e10993d1ce69bd578156e817531beacb59c/merged major:0 minor:569 fsType:overlay blockSize:0} overlay_0-571:{mountpoint:/var/lib/containers/storage/overlay/419ec4f067af6991692493c8a5e3bdc5b4a26bf37440795c5f9fb58e2582c826/merged major:0 minor:571 fsType:overlay blockSize:0} overlay_0-572:{mountpoint:/var/lib/containers/storage/overlay/6e912f26d634ffcb3f49ffc8eec39ed2583f605eb3ffd709aa9380a23ec47fb6/merged major:0 minor:572 fsType:overlay blockSize:0} overlay_0-573:{mountpoint:/var/lib/containers/storage/overlay/39628a189edfbdc63e6023e57d8c8f59a8129f2f3709adf42992d669ffe8ec05/merged major:0 minor:573 fsType:overlay blockSize:0} overlay_0-584:{mountpoint:/var/lib/containers/storage/overlay/457e3ad3798b758a674f0e2886e0b47b40887ac06af70cc75e1dc3af48cd039b/merged major:0 minor:584 fsType:overlay blockSize:0} overlay_0-586:{mountpoint:/var/lib/containers/storage/overlay/d992602a6599d08d2b07a1b3c8cb2a103c2d0917b1a2070ecef03b099280867f/merged major:0 minor:586 fsType:overlay blockSize:0} 
overlay_0-59:{mountpoint:/var/lib/containers/storage/overlay/5e3c90d2b660cfafb65a1ffa1d17f820ec93e9cbdda11bec92de556dce514959/merged major:0 minor:59 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/68a0009503b4e6dc83542834609a624dfd689f1a7d0b93ea6ac21958217af56c/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-606:{mountpoint:/var/lib/containers/storage/overlay/3d84d5226f610bf0256e9e5c40410419df7e80f1bb1208f5a22ea73d33cd996f/merged major:0 minor:606 fsType:overlay blockSize:0} overlay_0-609:{mountpoint:/var/lib/containers/storage/overlay/f3a1ad815934c44c42dd0815252acd823de55e7cc4e3a49d211d6ba99ee4c390/merged major:0 minor:609 fsType:overlay blockSize:0} overlay_0-619:{mountpoint:/var/lib/containers/storage/overlay/a24f7e949f55935160116212ae204534d737e4c911e26d38b285c6a4c3fe9478/merged major:0 minor:619 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/6dd68dd7aaf9cea8ca5eae20e7dbac484c3c8c0e471322ddb439f8efb0efb8af/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-621:{mountpoint:/var/lib/containers/storage/overlay/163abd482222373dc0ae9453bc60e0ddb190d2be11aaf56bcfb6250fa4398b39/merged major:0 minor:621 fsType:overlay blockSize:0} overlay_0-623:{mountpoint:/var/lib/containers/storage/overlay/074cd8e14e9b51674b18565ff1ac4094ebff43c6a1942e8cd6ecb8715fd059fc/merged major:0 minor:623 fsType:overlay blockSize:0} overlay_0-630:{mountpoint:/var/lib/containers/storage/overlay/69fce27599d5e5bab6d9147225d80bbe95bc502195edbe995c1853ea9fa6d590/merged major:0 minor:630 fsType:overlay blockSize:0} overlay_0-632:{mountpoint:/var/lib/containers/storage/overlay/3a047e6676f4d0d42e1e862bac87d4e5e9e024347fb2e744c266e8dd1fdab5c7/merged major:0 minor:632 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/901483b54398c1aaea5c779b3d5959e29406c90020ef90351528ac1793f794fa/merged major:0 
minor:637 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/c32168af668facc27189e8e0b7522ad1541f5298488ecb611fd8637a2c3f2727/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-646:{mountpoint:/var/lib/containers/storage/overlay/af7eae74d3c4fe1e29be1c1efd377a73513cc8d26291eb8ea18c5e940ed6cfd2/merged major:0 minor:646 fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/63a07ad595eef7b3dfd399978bd3105c8e0ea7f06d0653b0016321da56af1ca3/merged major:0 minor:648 fsType:overlay blockSize:0} overlay_0-650:{mountpoint:/var/lib/containers/storage/overlay/67dd87ddc643aab251545122876d2fffa6dc44e195157e9595e7e6ed7fc5e58e/merged major:0 minor:650 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/bce3e8453af2d1ec95829d5c0cca9c7f13d779c4b7114a1c3d79db70aad527fc/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-656:{mountpoint:/var/lib/containers/storage/overlay/701bce3c7b67d0c097787a0f77cb24807dcb29fd5846c9f5374cf8f8f8bef09c/merged major:0 minor:656 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/cc14fcbf5ff3b935aef3f8cdb55d8f148da4af4efeba4ed722f13280e67b14f5/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/b255003a5612024114a11c378a70058e4617d3535ec19ec63ca02f7c07a2c696/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-668:{mountpoint:/var/lib/containers/storage/overlay/18a472cd84d21ebc0b5168e26109fcd442da30bee794e945a314b912db409307/merged major:0 minor:668 fsType:overlay blockSize:0} overlay_0-671:{mountpoint:/var/lib/containers/storage/overlay/c38f1e78852acf78b161b4e5fd7f79e4d9103f102ae9966efaf27a394e860f04/merged major:0 minor:671 fsType:overlay blockSize:0} overlay_0-678:{mountpoint:/var/lib/containers/storage/overlay/48665996f468411c41df9d84cd8211fcdbec84b5fff0119c363a10b15e62b131/merged major:0 minor:678 
fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/3a83d1f0a69c42e689a8a954c91fd8260412ab63b277830c0d8628957e1f6fd2/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/8889c35579784a283a7da7169d37162701e6425aa45039ec5945dc2f871fbdb8/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/67ae0f9c79e009dbb2038372500916c2c738cef977eaa2d91896184e938e3d87/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-691:{mountpoint:/var/lib/containers/storage/overlay/b23b8fad02b9b10bf646f587f44692be266fe4218e22b8e52c8a00d9819e1b88/merged major:0 minor:691 fsType:overlay blockSize:0} overlay_0-699:{mountpoint:/var/lib/containers/storage/overlay/d18291aeea5010e47066580b10869840335fa7e215fbafffa7f649c4af1eaa1b/merged major:0 minor:699 fsType:overlay blockSize:0} overlay_0-700:{mountpoint:/var/lib/containers/storage/overlay/3faf8fff441b75a04f1f196b14d2cb79d0a28015c9a70ce605b66eb143726563/merged major:0 minor:700 fsType:overlay blockSize:0} overlay_0-707:{mountpoint:/var/lib/containers/storage/overlay/d60955041e32c64483ef5688be9cb72e71b6f4f8bb38ad39c3f4d329369ecc58/merged major:0 minor:707 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/a2ac25929e02237f5baebc6667631caaa161b6c2f73f57cd68456b799e64b9a8/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-713:{mountpoint:/var/lib/containers/storage/overlay/55fd19ff58feb73b92f2daa79c3a7dbd53846d2b15ee2f532c4500d37d4600ee/merged major:0 minor:713 fsType:overlay blockSize:0} overlay_0-723:{mountpoint:/var/lib/containers/storage/overlay/2c63f28a1d4f63cd1e6abd4fce22ca979beaf094f27d4d915136cbffc05516a0/merged major:0 minor:723 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/383fa916b3213c7cce61c3a65c9a5652fe316bf0947e0558b5bdeb50790c1070/merged major:0 minor:73 fsType:overlay blockSize:0} 
overlay_0-730:{mountpoint:/var/lib/containers/storage/overlay/b2e27434fd3671135fce2206e23191e238fd2d250ff983e43fb2af9e9069b7f0/merged major:0 minor:730 fsType:overlay blockSize:0} overlay_0-734:{mountpoint:/var/lib/containers/storage/overlay/f45a274ba61dd26bc660d49d7e0a59c6cc039883de4b94c33040b421fa19cfc6/merged major:0 minor:734 fsType:overlay blockSize:0} overlay_0-739:{mountpoint:/var/lib/containers/storage/overlay/18f8fc74ef90a0461ad79cceb916355eb3f144e0d5cff548b09144bf5545324a/merged major:0 minor:739 fsType:overlay blockSize:0} overlay_0-743:{mountpoint:/var/lib/containers/storage/overlay/5edd82cd92c23dfc87a1f6084831f1f5a10d465947ac46e9ddea10ee1b7020e4/merged major:0 minor:743 fsType:overlay blockSize:0} overlay_0-744:{mountpoint:/var/lib/containers/storage/overlay/ea82ff706273081190d3870e5fd5a3d89489343b0b1d0ceb26d26cd98e1a8e68/merged major:0 minor:744 fsType:overlay blockSize:0} overlay_0-750:{mountpoint:/var/lib/containers/storage/overlay/c25d1ef97b71aa943edf907bec8658b69432ba928eb53c7147702754c7e9e634/merged major:0 minor:750 fsType:overlay blockSize:0} overlay_0-756:{mountpoint:/var/lib/containers/storage/overlay/0729d3013c84f2c2cf4f08ee252eb9bf52f1ccdf30d9d0fbf5546d7283bea9f5/merged major:0 minor:756 fsType:overlay blockSize:0} overlay_0-759:{mountpoint:/var/lib/containers/storage/overlay/1fdb5290db2fc9187504a97e182aa88a1566cefd3eb217b65beec1d48d2e02e9/merged major:0 minor:759 fsType:overlay blockSize:0} overlay_0-760:{mountpoint:/var/lib/containers/storage/overlay/5a9584bee7c5128616041cc3f7903ccb6cd2f9bc3b33d613a5c901b4309cb293/merged major:0 minor:760 fsType:overlay blockSize:0} overlay_0-764:{mountpoint:/var/lib/containers/storage/overlay/daaa5b65a678e78346d04ea35066f0fb1f9c54f0a2c068c1ab9bf9e060ce4000/merged major:0 minor:764 fsType:overlay blockSize:0} overlay_0-769:{mountpoint:/var/lib/containers/storage/overlay/9a76a412d181067387245e92d3f4570713e7bce7cb4c403ac435c2f8184522c4/merged major:0 minor:769 fsType:overlay blockSize:0} 
overlay_0-779:{mountpoint:/var/lib/containers/storage/overlay/dd27a4808b923bd39d3881a881088108cd981b719a4849de2706cae8751172b3/merged major:0 minor:779 fsType:overlay blockSize:0} overlay_0-788:{mountpoint:/var/lib/containers/storage/overlay/c6d9ee5b98ee4da2a9f21a7bd2e1f2cddfc4e9640197ae6511ad8590748ec82d/merged major:0 minor:788 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/82fe4f74c113452ddb97e38d8cbfa09b1c31a529b4e6604718a15878b11332cd/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-796:{mountpoint:/var/lib/containers/storage/overlay/93b02dcb23f9f291e75717db0e447764cdfc675912ebb28a4bae4ecea31c5f58/merged major:0 minor:796 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/db6c0e84020affcf6d0e953ff499fc41d58daa14257ec9f4f170f468c8170b10/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/52c8fc400fa62c4502d290d9b843ceed5229369a5ba50e4bb7b6ba05f8c6a47c/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-822:{mountpoint:/var/lib/containers/storage/overlay/00bf70bfcaab1f0efdee95ef1e9e36d56e159868e40e806231125d8836cf1d6e/merged major:0 minor:822 fsType:overlay blockSize:0} overlay_0-838:{mountpoint:/var/lib/containers/storage/overlay/c7c0be6d40b61dc0d2c98a6dae557f80adf1e8cbce76c08edba0373a5cfbaf3f/merged major:0 minor:838 fsType:overlay blockSize:0} overlay_0-848:{mountpoint:/var/lib/containers/storage/overlay/5159e5a6dd570f3c3906d12128ccebd9a5d9cf01c167e6f7bf6dc3d209149724/merged major:0 minor:848 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/78352015c9dbbf0870cca0f81cb40a7fa029e1b21e72a1f2182f3133924c7122/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/7ee99c5261ce91653733afc2ef2efce35171f2c62810106c77043da77fb109e1/merged major:0 minor:852 fsType:overlay blockSize:0} 
overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/5e2a2dcf9ee91734fe92f8225d9d79bfe79ee731e513660806ab2e575871d7bd/merged major:0 minor:858 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/1573b25a496eaad548c3028daef3ac39b33ceca68a5d7dc08475474e21f1ee54/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-862:{mountpoint:/var/lib/containers/storage/overlay/605021ea4b0376db45ccf01296cdcc2e2c687d6cdf5c73c8ac4858ddf7271466/merged major:0 minor:862 fsType:overlay blockSize:0} overlay_0-867:{mountpoint:/var/lib/containers/storage/overlay/5b63b18dd84f3736ac1e323a206361306e3c257e46d54e31b730ba6102718b4e/merged major:0 minor:867 fsType:overlay blockSize:0} overlay_0-869:{mountpoint:/var/lib/containers/storage/overlay/2affc31f66f255be10de1ceaa84d2c6911b6bc962e95f676d11e46e841f6aaff/merged major:0 minor:869 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/3985916fb0c76cdeece335299ff925d12ddfa3897175f609bbe66b2f5d04045f/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-871:{mountpoint:/var/lib/containers/storage/overlay/2862949d5861074103fd83613104ef37bbb09235a68d469ce352ca96831aa4da/merged major:0 minor:871 fsType:overlay blockSize:0} overlay_0-873:{mountpoint:/var/lib/containers/storage/overlay/fba5ab010b1e0b28e858872bb69ab9beb887aeb76b4aaa6971841acbe5cb2ead/merged major:0 minor:873 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/d70677c7c6feaf9f988db988b4d2b6a37ccd81ff6e800eff5955a36abbf70ffd/merged major:0 minor:877 fsType:overlay blockSize:0} overlay_0-879:{mountpoint:/var/lib/containers/storage/overlay/67a53bffef035ef4405ceaa7c8a3b559533d56923209fc6349f7c7f1a30fffe5/merged major:0 minor:879 fsType:overlay blockSize:0} overlay_0-881:{mountpoint:/var/lib/containers/storage/overlay/edf6b6840755e1266881195dc776e611f50cee4abd2073156d603e81a02a153d/merged major:0 minor:881 fsType:overlay blockSize:0} 
overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/64174c6eec32554bba415c2e95c512c3daeb1ed0878e1b4f281d997bda42644d/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/2cea8c1b0a200e4dced8fb3768b55361711dc711b96d67360219b3dedebfde38/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-899:{mountpoint:/var/lib/containers/storage/overlay/be9a65b5387d81d425e1f525df469a85c20f2701cea2ff69663ccce289b8c8ab/merged major:0 minor:899 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/29596896771ded0e175c137fa02d02462ac4f86b01814dc88b91e85558915d8f/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-904:{mountpoint:/var/lib/containers/storage/overlay/bb5ad94f9c6d32adac9ba4da0a7b62b98ceed934b071f1e59815b8df3c7802fa/merged major:0 minor:904 fsType:overlay blockSize:0} overlay_0-911:{mountpoint:/var/lib/containers/storage/overlay/45e46c429c27b745aa2eb0f335634c6fe2eeb5eb90548d5c6ebf6f15f267c51b/merged major:0 minor:911 fsType:overlay blockSize:0} overlay_0-913:{mountpoint:/var/lib/containers/storage/overlay/76e4ad83bbb37f46a30237001a2758cee09367dfd9a4b5a056e1339bbed7dffd/merged major:0 minor:913 fsType:overlay blockSize:0} overlay_0-915:{mountpoint:/var/lib/containers/storage/overlay/2f5b6b6e82b7ff35048aa2a9e62df43640877fc823ca2ac70d2d01371148e03e/merged major:0 minor:915 fsType:overlay blockSize:0} overlay_0-917:{mountpoint:/var/lib/containers/storage/overlay/08605e1dad41e1999af4a24086db3f03fee4973ad2dedffed49c7016ae363690/merged major:0 minor:917 fsType:overlay blockSize:0} overlay_0-920:{mountpoint:/var/lib/containers/storage/overlay/96a7084212615f57aa0230168cc1afa901977f701c12c09099521cfc1866cd7e/merged major:0 minor:920 fsType:overlay blockSize:0} overlay_0-921:{mountpoint:/var/lib/containers/storage/overlay/856fa848d29d0f93373fa9009c59840b78dfc1d8f5a8466ca2d3dab9d18bfcfe/merged major:0 minor:921 fsType:overlay blockSize:0} 
overlay_0-923:{mountpoint:/var/lib/containers/storage/overlay/7bcdac565d2ba1f2877c9e92d59e731086255b1b3e83e3e451edf0b264eee5f0/merged major:0 minor:923 fsType:overlay blockSize:0} overlay_0-926:{mountpoint:/var/lib/containers/storage/overlay/1cd75c10e3e51343699e1bb9d642c598103543783e875912a26820ef48e7eaa8/merged major:0 minor:926 fsType:overlay blockSize:0} overlay_0-927:{mountpoint:/var/lib/containers/storage/overlay/bbeb323714821432a7d5af8bd0a78b53232325c11936b36906bb3bae9cd99f7a/merged major:0 minor:927 fsType:overlay blockSize:0} overlay_0-928:{mountpoint:/var/lib/containers/storage/overlay/615c4de4cb3c6715870660c2c06b8e2274924c2c0285d54cc7d7e38a1871113b/merged major:0 minor:928 fsType:overlay blockSize:0} overlay_0-930:{mountpoint:/var/lib/containers/storage/overlay/7f9366072a5efa83fa26daa89cf9edeae01527c654b728c7d8fd35ca8132b74e/merged major:0 minor:930 fsType:overlay blockSize:0} overlay_0-932:{mountpoint:/var/lib/containers/storage/overlay/bc089e36146f0de6b266594378322ef39680c03f33da1d362398b643f23847be/merged major:0 minor:932 fsType:overlay blockSize:0} overlay_0-941:{mountpoint:/var/lib/containers/storage/overlay/cbc6cf441028ee415c4bf4ab7393b488bd536650a73723e6babf48eafaf90fb5/merged major:0 minor:941 fsType:overlay blockSize:0} overlay_0-944:{mountpoint:/var/lib/containers/storage/overlay/c511e668a21e4fcfb05bb5e762c6074f08559ed00bacbfc22aab9971b179ab30/merged major:0 minor:944 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/77f4d0e35aac64cda08ec7df2d55bd711cbfba9ed702ed2cb27eb1618b59ff90/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/0aba6a6f69a954b48685b16995bbeae5618feb5676b95252b24136f225ed38b4/merged major:0 minor:951 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/9aa9aa615e533f3899e2104279d080a4784b3089d13de9a5776d4abe2d01bff6/merged major:0 minor:97 fsType:overlay blockSize:0} 
overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/ee3a48320cedea7c9ac32cd7585b9db19db32e773763bff5e95f774ab29dc28e/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-974:{mountpoint:/var/lib/containers/storage/overlay/7fba611a39c276682c75a32d0e07a7d10fd5d6248f18e1dbfb4185dd7372660f/merged major:0 minor:974 fsType:overlay blockSize:0} overlay_0-975:{mountpoint:/var/lib/containers/storage/overlay/af084c1abce6093550185d28d0ce1025ff000066db2fdbc14c5cbbe392dea67c/merged major:0 minor:975 fsType:overlay blockSize:0} overlay_0-983:{mountpoint:/var/lib/containers/storage/overlay/adf884d4c303a0e4476ed5a483517d7df9d642469f20fec073b26fcefc9033bc/merged major:0 minor:983 fsType:overlay blockSize:0} overlay_0-997:{mountpoint:/var/lib/containers/storage/overlay/0c393033ca2463ce69afa4fb0e9ef738a9916f08dc4e90272574bc5734a931ac/merged major:0 minor:997 fsType:overlay blockSize:0} overlay_0-998:{mountpoint:/var/lib/containers/storage/overlay/930848b9b81051d1c91222f80e2f7e79a01478eeb6ec4787bb5e15e8a3c43e91/merged major:0 minor:998 fsType:overlay blockSize:0}] Feb 23 13:14:38.315224 master-0 kubenswrapper[26474]: I0223 13:14:38.313695 26474 manager.go:217] Machine: {Timestamp:2026-02-23 13:14:38.312404399 +0000 UTC m=+0.158912096 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:813062fc9ff74d30ae5cd2159d83a791 SystemUUID:813062fc-9ff7-4d30-ae5c-d2159d83a791 BootID:4abb3f7a-5d3d-42f2-a9ae-25fe202cc7d3 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~projected/kube-api-access-sl5r2 DeviceMajor:0 DeviceMinor:533 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-586 DeviceMajor:0 DeviceMinor:586 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57803492-e1dd-4994-8330-1e9b393d54fd/volumes/kubernetes.io~projected/kube-api-access-vg2gm DeviceMajor:0 DeviceMinor:902 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~projected/kube-api-access-ght2z DeviceMajor:0 DeviceMinor:833 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~projected/kube-api-access-m6mk9 DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9959e2d6fc0e6062e1b30f4c255bb412b060833da615243b7b7f9ead8e5237eb/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-422 DeviceMajor:0 DeviceMinor:422 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1176 DeviceMajor:0 DeviceMinor:1176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/encryption-config 
DeviceMajor:0 DeviceMinor:500 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-480 DeviceMajor:0 DeviceMinor:480 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~projected/kube-api-access-znjcw DeviceMajor:0 DeviceMinor:831 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ce27248bc3e9346c25a58fe23a84bf9588e2d35effcd9c895b600fb3cca69c80/userdata/shm DeviceMajor:0 DeviceMinor:1187 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~projected/kube-api-access-q78mm DeviceMajor:0 DeviceMinor:1210 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-433 DeviceMajor:0 DeviceMinor:433 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:499 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f17488314313adf8e5d4ca3b5623c6439e87e4c15c926ef56dd3963870bb1fef/userdata/shm DeviceMajor:0 DeviceMinor:505 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-518 DeviceMajor:0 DeviceMinor:518 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:829 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-750 DeviceMajor:0 DeviceMinor:750 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-619 DeviceMajor:0 DeviceMinor:619 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1273 DeviceMajor:0 DeviceMinor:1273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-632 DeviceMajor:0 DeviceMinor:632 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a663ecaf-ced2-4c7d-91c8-44e94851f7d6/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:809 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d0a039f1b3c97c24fad7030cd466c101137e0b84c1b4d70fca972fbe2ee77402/userdata/shm DeviceMajor:0 DeviceMinor:508 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1024 DeviceMajor:0 DeviceMinor:1024 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:162 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6ff7868e-f0d3-4c63-901f-fed11d623cf1/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:440 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-646 DeviceMajor:0 DeviceMinor:646 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~projected/kube-api-access-p7b4r DeviceMajor:0 DeviceMinor:814 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-930 DeviceMajor:0 DeviceMinor:930 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b12352eb-04d7-4419-b1bf-d08bca9da599/volumes/kubernetes.io~projected/kube-api-access-cpnzd DeviceMajor:0 DeviceMinor:1059 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-538 DeviceMajor:0 DeviceMinor:538 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1223 DeviceMajor:0 DeviceMinor:1223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1028 DeviceMajor:0 DeviceMinor:1028 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1068 DeviceMajor:0 DeviceMinor:1068 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/43ecd54108687a6a19ab0e0e7609a070fe6d95b30fac709f11974346b31eb83b/userdata/shm DeviceMajor:0 DeviceMinor:658 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:488 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~projected/kube-api-access-8j6q5 DeviceMajor:0 DeviceMinor:108 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~projected/kube-api-access-cksnd DeviceMajor:0 DeviceMinor:636 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-430 DeviceMajor:0 DeviceMinor:430 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3d07b83dc456c1a725cd00216a0076881595c484156f383050f864fdf8f89296/userdata/shm DeviceMajor:0 DeviceMinor:1060 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-788 DeviceMajor:0 DeviceMinor:788 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1203 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/18386753-ec74-456d-838d-98c07c169b4b/volumes/kubernetes.io~projected/kube-api-access-9d6s7 DeviceMajor:0 DeviceMinor:161 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4133bbba4cf25e1be6ab1072b03f13b245e190781ec479d3c282a0fc67bb453a/userdata/shm DeviceMajor:0 DeviceMinor:280 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/145c6dfd849efd46dc26276303a1f5b415b80906ee5317528490f8e2825ca752/userdata/shm DeviceMajor:0 DeviceMinor:617 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1207 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fcbd0dfcd13ca5f8a8db77172cb144a3166d04c3140529e3b2606f791e557f0c/userdata/shm DeviceMajor:0 DeviceMinor:503 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~projected/kube-api-access-d6d4r DeviceMajor:0 DeviceMinor:604 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-477 DeviceMajor:0 DeviceMinor:477 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~projected/kube-api-access-n8c76 DeviceMajor:0 DeviceMinor:1208 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-175 DeviceMajor:0 DeviceMinor:175 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1157 DeviceMajor:0 DeviceMinor:1157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bf57b864-25d7-4420-9052-04dd580a9f7d/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:806 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfa9c4cdf55305be5a011d885ede624b7c0239e78eaa9736ed4cc34f79b42e2f/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:603 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5b459832-b875-49a6-a7c3-253fa6c8e45a/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:947 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1312 DeviceMajor:0 DeviceMinor:1312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/99f14e64-228f-4b9e-991f-ee398fe7bb8a/volumes/kubernetes.io~projected/kube-api-access-p6b4v DeviceMajor:0 DeviceMinor:118 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-410 DeviceMajor:0 DeviceMinor:410 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1221 DeviceMajor:0 DeviceMinor:1221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d7c1ea0-e3c1-4494-bb27-058200b93ed7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:107 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:805 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-871 DeviceMajor:0 DeviceMinor:871 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 
DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~projected/kube-api-access-f4mkf DeviceMajor:0 DeviceMinor:149 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~projected/kube-api-access-zcqzj DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7c61886-6cc7-44aa-b56a-81cdcc670993/volumes/kubernetes.io~projected/kube-api-access-mq2rn DeviceMajor:0 DeviceMinor:588 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e5802841-52dc-4d15-a252-0eac70e9fbbc/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:758 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-699 DeviceMajor:0 DeviceMinor:699 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ff7868e-f0d3-4c63-901f-fed11d623cf1/volumes/kubernetes.io~projected/kube-api-access-r6xw4 DeviceMajor:0 DeviceMinor:445 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-873 DeviceMajor:0 DeviceMinor:873 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-432 
DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1177 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-493 DeviceMajor:0 DeviceMinor:493 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-730 DeviceMajor:0 DeviceMinor:730 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-395 DeviceMajor:0 DeviceMinor:395 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1103 DeviceMajor:0 DeviceMinor:1103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ccbaed9-ab28-47c0-a585-648b9251fd11/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1204 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e8887c7d6eee650b037c513d33ece3c0abae0325c7cfbd8aa521e15955d8540b/userdata/shm DeviceMajor:0 DeviceMinor:836 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b933426682f905b163cdeceb81784d840d9932bd08aab494209ff2aa752893c3/userdata/shm DeviceMajor:0 DeviceMinor:625 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:789 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5b459832-b875-49a6-a7c3-253fa6c8e45a/volumes/kubernetes.io~projected/kube-api-access-wg9l8 DeviceMajor:0 DeviceMinor:985 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:1168 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aacef958695b1652452330209b40a5322d7de81c0ce86e84b51d42da90b8a1df/userdata/shm DeviceMajor:0 DeviceMinor:1271 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6/volumes/kubernetes.io~projected/kube-api-access-znzzv DeviceMajor:0 DeviceMinor:797 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~projected/kube-api-access-mfxjf DeviceMajor:0 DeviceMinor:1169 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1275 DeviceMajor:0 DeviceMinor:1275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-779 DeviceMajor:0 DeviceMinor:779 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-527 DeviceMajor:0 DeviceMinor:527 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1000 DeviceMajor:0 DeviceMinor:1000 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-563 DeviceMajor:0 DeviceMinor:563 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584/userdata/shm DeviceMajor:0 DeviceMinor:436 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~projected/kube-api-access-nnmqj DeviceMajor:0 DeviceMinor:130 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1206 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6244c01d47d261c3397fd1d23da4ef09fefd7e2aec48680428b0aeff62e0f579/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24d878bd-05cd-414e-94c1-a3e9ce637331/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~projected/kube-api-access-7l66s DeviceMajor:0 DeviceMinor:1058 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-539 DeviceMajor:0 DeviceMinor:539 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4df5f2d226a98cd9443f9e29da033c2146ea5a128236486d62e724363fd7a50e/userdata/shm DeviceMajor:0 DeviceMinor:106 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-438 DeviceMajor:0 DeviceMinor:438 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cabaebba9338acf6d962ed84ef7c4c178c189927ee3320bb1144f49c679ae574/userdata/shm DeviceMajor:0 DeviceMinor:354 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-921 DeviceMajor:0 DeviceMinor:921 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a5284f9-cbb7-400b-ab39-bfef60ec198b/volumes/kubernetes.io~projected/kube-api-access-j744d DeviceMajor:0 DeviceMinor:798 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6dc83a57-34c5-4c64-97d3-b6191ba690eb/volumes/kubernetes.io~projected/kube-api-access-b64s6 DeviceMajor:0 DeviceMinor:616 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-479 DeviceMajor:0 DeviceMinor:479 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7cc4f51accb13c4562d715ad8c6bcb57f3016abaa8450769a7e898e5187b65a3/userdata/shm DeviceMajor:0 DeviceMinor:903 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1002 DeviceMajor:0 DeviceMinor:1002 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1074 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-379 DeviceMajor:0 DeviceMinor:379 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1215 DeviceMajor:0 DeviceMinor:1215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-707 DeviceMajor:0 DeviceMinor:707 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:790 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f2a89a16b5cdf52d6375f76682e1cfee8f9385caf2527d209e27f16f7df56fbc/userdata/shm DeviceMajor:0 DeviceMinor:1022 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6786dcf48d821a6321a52c765c39223e7ae469bc0400a1737f59d9fc5cdb110/userdata/shm DeviceMajor:0 DeviceMinor:447 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-759 DeviceMajor:0 DeviceMinor:759 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7c61886-6cc7-44aa-b56a-81cdcc670993/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:564 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cad0c8926c7213aaa96592ae903f4500c7805a83b9f9d84dbe60a8d1bef3fe27/userdata/shm DeviceMajor:0 DeviceMinor:722 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/06ccd378-23ee-49b7-a435-4b01de772155/volumes/kubernetes.io~projected/kube-api-access-7cjfj DeviceMajor:0 DeviceMinor:1017 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2306eee29ab96f13a7b6b9bf9f3a4b8c1be47a50f030b34cf5a3b0197274b3fb/userdata/shm DeviceMajor:0 DeviceMinor:486 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-630 DeviceMajor:0 
DeviceMinor:630 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-584 DeviceMajor:0 DeviceMinor:584 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/945907dd-f6b3-400f-b539-e1310eb11dd7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:875 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-442 DeviceMajor:0 DeviceMinor:442 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1256 DeviceMajor:0 DeviceMinor:1256 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~projected/kube-api-access-jftvv DeviceMajor:0 DeviceMinor:449 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/279e93b0fd46f71a0b7004cb4febe2cd24136f3bf75f93770b5add473b652180/userdata/shm DeviceMajor:0 DeviceMinor:994 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/54001c8e-cb57-47dc-8594-9daed4190bda/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1052 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-417 DeviceMajor:0 DeviceMinor:417 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf57b864-25d7-4420-9052-04dd580a9f7d/volumes/kubernetes.io~projected/kube-api-access-bdbct DeviceMajor:0 DeviceMinor:810 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1165 DeviceMajor:0 DeviceMinor:1165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3a2d420757c83bca1045a3ec4516092ddbbd8abbf4f20e54b4c522c5d6328b82/userdata/shm DeviceMajor:0 DeviceMinor:845 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1292 DeviceMajor:0 DeviceMinor:1292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-368 DeviceMajor:0 DeviceMinor:368 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-997 DeviceMajor:0 DeviceMinor:997 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3d2ae481af3820c6d335cc284f48b3c5d01e31588b587ef4c932a0770497923/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:635 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ede583b-44b0-42af-92c9-f7b8938f7843/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:807 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0d134032-1c35-4b69-9336-bcdc9c1cb87d/volumes/kubernetes.io~projected/kube-api-access-wjkkc DeviceMajor:0 DeviceMinor:803 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1079 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/08edfd088420ec54fe8d544a9cf3834e313b838e6833ffb5cbc7d4df5c13203d/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-333 DeviceMajor:0 DeviceMinor:333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-822 DeviceMajor:0 DeviceMinor:822 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-167 DeviceMajor:0 DeviceMinor:167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-454 DeviceMajor:0 DeviceMinor:454 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:498 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-756 DeviceMajor:0 DeviceMinor:756 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6193af77058187e06675d2dcef9a4d240856c04b59dbfbf3639238a188d008e4/userdata/shm DeviceMajor:0 DeviceMinor:832 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1010 DeviceMajor:0 DeviceMinor:1010 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b907614a9591efb37b88a7686e4a790de265f0304e777404050b8a95d8f70969/userdata/shm DeviceMajor:0 DeviceMinor:512 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/190e3738a8b34a408e3ce9a92a26c53265addc4888d71fa248a2acde65380192/userdata/shm DeviceMajor:0 DeviceMinor:819 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-928 DeviceMajor:0 DeviceMinor:928 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~projected/kube-api-access-qfqmb DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-382 DeviceMajor:0 DeviceMinor:382 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/541d291f058dd5b70e07bddf8e5c1d943bc102cd629ce0c0ca5aee055819cc00/userdata/shm DeviceMajor:0 DeviceMinor:835 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-932 DeviceMajor:0 DeviceMinor:932 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ea854c71b4635030d564de3c9b4bda5092d758842842ec36011dafb1d8036a8/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3a6b0d84-a344-43e4-b9c4-c8e0670528de/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-713 DeviceMajor:0 DeviceMinor:713 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e5802841-52dc-4d15-a252-0eac70e9fbbc/volumes/kubernetes.io~projected/kube-api-access-nvg7b DeviceMajor:0 DeviceMinor:353 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1007 DeviceMajor:0 DeviceMinor:1007 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1056 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:494 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-881 DeviceMajor:0 DeviceMinor:881 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1101 DeviceMajor:0 DeviceMinor:1101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e39cd7913139ec4a31a1146d0be93b2e15f9987bc352db68468df526793c9e90/userdata/shm DeviceMajor:0 DeviceMinor:154 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/71cb2f21-6d27-411f-9c2f-d5fa286895a7/volumes/kubernetes.io~projected/kube-api-access-wkxv7 DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/922e0be5-23c2-481e-89be-e918dc4ce90c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:502 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-543 DeviceMajor:0 DeviceMinor:543 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/affc63b7-db45-429d-82ff-e50f6aae51dc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:791 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-941 DeviceMajor:0 DeviceMinor:941 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1072 DeviceMajor:0 DeviceMinor:1072 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-899 DeviceMajor:0 DeviceMinor:899 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-769 DeviceMajor:0 DeviceMinor:769 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a16ebd5549e68460cc5eab019554f78bf08c0501964eec5eb6763ec49c8e6ef3/userdata/shm DeviceMajor:0 DeviceMinor:352 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-609 DeviceMajor:0 DeviceMinor:609 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1039 DeviceMajor:0 DeviceMinor:1039 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-525 DeviceMajor:0 DeviceMinor:525 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-541 DeviceMajor:0 DeviceMinor:541 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-739 DeviceMajor:0 DeviceMinor:739 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8422281d-af45-4f17-8f15-ac3fd9da4bbc/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:602 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0ec420225c84b73dd443227473f7e5be0a534249c7a20c4f249c305d05092cd3/userdata/shm DeviceMajor:0 DeviceMinor:1309 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75945fe5446b39503e5979f2f52d34856c1818094eabf5941cf78b6b1ecb46b2/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/482e25d3c964b6d3e2b3936d268fb90a41f7791876fc0fe26d190a21ad959690/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-59 DeviceMajor:0 DeviceMinor:59 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/kube-api-access-9qsvg DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-546 DeviceMajor:0 DeviceMinor:546 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-879 DeviceMajor:0 DeviceMinor:879 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-571 DeviceMajor:0 DeviceMinor:571 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:286 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-862 DeviceMajor:0 DeviceMinor:862 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1240 DeviceMajor:0 DeviceMinor:1240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-122 DeviceMajor:0 DeviceMinor:122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-491 DeviceMajor:0 DeviceMinor:491 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5793184d-de96-49ad-a060-0fa0cf278a9c/volumes/kubernetes.io~projected/kube-api-access-v9dcr DeviceMajor:0 DeviceMinor:341 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-621 DeviceMajor:0 DeviceMinor:621 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/309d089bca6e7c97d1cbeac6a63a1ce937ecc0912c1d3b3166d1ba3db4f77535/userdata/shm DeviceMajor:0 DeviceMinor:641 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/898e6c96-73d5-4dc5-a383-986599a5bcd9/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:828 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae8b0e50-59ee-44a9-9a66-8febb833b771/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1199 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/92eaa2e2-61cd-4279-a81f-72db51308148/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-560 DeviceMajor:0 DeviceMinor:560 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-678 DeviceMajor:0 DeviceMinor:678 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-983 DeviceMajor:0 DeviceMinor:983 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1182 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1189 DeviceMajor:0 DeviceMinor:1189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-911 DeviceMajor:0 DeviceMinor:911 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fcc6cec1e5cb2ad6735082c479ccfca43dd610036ae64420869156c1921dfe15/userdata/shm DeviceMajor:0 DeviceMinor:840 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/volumes/kubernetes.io~projected/kube-api-access-mhbhv DeviceMajor:0 DeviceMinor:794 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-904 DeviceMajor:0 DeviceMinor:904 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1029 DeviceMajor:0 DeviceMinor:1029 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-444 DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-974 DeviceMajor:0 DeviceMinor:974 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/968a9822c87f73e9559c28309a177baff6729af2cf700098ba1888ec0387b7bc/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3daf0176-92e7-4642-8643-4afbefb77235/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1304 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-393 DeviceMajor:0 DeviceMinor:393 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a663ecaf-ced2-4c7d-91c8-44e94851f7d6/volumes/kubernetes.io~projected/kube-api-access-nn9mt DeviceMajor:0 DeviceMinor:811 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e7536105ed8ca3b11d6f435a7c98206e700714a5620adc113fdcbd58553c7a29/userdata/shm DeviceMajor:0 DeviceMinor:837 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ee9a1940c33a806fb7d2b67d55759c2800825e955f42196e80b987da264d740b/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-796 DeviceMajor:0 DeviceMinor:796 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d87fae590598a273f246a10c0bceaf42bb07fba93878914b2833795c3815488b/userdata/shm DeviceMajor:0 DeviceMinor:377 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-426 DeviceMajor:0 DeviceMinor:426 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-867 DeviceMajor:0 DeviceMinor:867 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-1172 DeviceMajor:0 DeviceMinor:1172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1191 DeviceMajor:0 DeviceMinor:1191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:827 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-569 DeviceMajor:0 DeviceMinor:569 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-466 DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~projected/kube-api-access-l8wvx DeviceMajor:0 DeviceMinor:1270 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~projected/kube-api-access-gt4vh DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f47fa225-93fd-458b-b450-a0411e629afd/volumes/kubernetes.io~projected/kube-api-access-4mkd2 DeviceMajor:0 DeviceMinor:596 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-700 DeviceMajor:0 DeviceMinor:700 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1035 DeviceMajor:0 DeviceMinor:1035 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/29126ab2-a689-4b0e-a1f4-4faed19b0fbc/volumes/kubernetes.io~projected/kube-api-access-nwrjc DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f2c50f9a-8c73-4cb9-9cbf-2565496212a6/volumes/kubernetes.io~projected/kube-api-access-4b825 DeviceMajor:0 DeviceMinor:419 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-468 DeviceMajor:0 DeviceMinor:468 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:634 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c82f333c34ccef505d90be2625a653e8f37ab4380674ab7fe3db008c649abee9/userdata/shm DeviceMajor:0 DeviceMinor:163 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-202 DeviceMajor:0 DeviceMinor:202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a5cd0e8536fcc54350ba490f0eb9ca59486f86834d7ae3d682b2a13eefc4e56/userdata/shm DeviceMajor:0 DeviceMinor:504 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/762249c6-b548-4733-8b78-64f73430bfbd/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:1167 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d48d286d-4f37-4027-86cd-1580e6076613/volumes/kubernetes.io~projected/kube-api-access-fzdfs DeviceMajor:0 DeviceMinor:102 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d71885db-c29e-429a-aa1f-1c274796a69f/volumes/kubernetes.io~projected/kube-api-access-9z9jc DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1236 DeviceMajor:0 DeviceMinor:1236 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-869 DeviceMajor:0 DeviceMinor:869 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9e0e3072-a35c-4404-891c-f31fafd0b4b1/volumes/kubernetes.io~projected/kube-api-access-rmcjv DeviceMajor:0 DeviceMinor:792 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1139 DeviceMajor:0 DeviceMinor:1139 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-531 DeviceMajor:0 DeviceMinor:531 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da6c81e13c5cd804d420f7f4edf19815f1956a13d253393fd72b5fcf83a8c917/userdata/shm DeviceMajor:0 DeviceMinor:124 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:495 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1070 DeviceMajor:0 DeviceMinor:1070 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/752f9e4ea839752f549240b10d1d3e2131f24a8cc548f81bd7a8c88ec615bb72/userdata/shm DeviceMajor:0 DeviceMinor:1217 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b8bdbf92-61e3-41e9-a48d-4259cee80e9f/volumes/kubernetes.io~projected/kube-api-access-t9lvg DeviceMajor:0 DeviceMinor:282 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1098 DeviceMajor:0 DeviceMinor:1098 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86bfbedca58264a38e839a587d5e58f4d6fbf5d20a12071a7c803f4a3f76ad13/userdata/shm DeviceMajor:0 DeviceMinor:288 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:501 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-523 DeviceMajor:0 DeviceMinor:523 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-542 DeviceMajor:0 DeviceMinor:542 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9b39c2081f67776044f305ac72592df90a62e4fce161411cf7d76ff26a6efb9/userdata/shm DeviceMajor:0 DeviceMinor:615 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f5a1d137318bed9fd566b8260909991bd75bf6152bc142fe74433e2215565edb/userdata/shm DeviceMajor:0 DeviceMinor:793 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9864787648d4e15093640c185e770cdfc44c9c159ce5adfbe7392aee39b016ba/userdata/shm DeviceMajor:0 DeviceMinor:126 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/052c44c6601541168da6658fd684e918c275c792bb8a5e698af0c5869ee863d3/userdata/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f2c50f9a-8c73-4cb9-9cbf-2565496212a6/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:416 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1016 DeviceMajor:0 DeviceMinor:1016 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/655e1b023cf7c57ce36ad89fb7b5ee982e50f51224c428832341448b6acdee46/userdata/shm DeviceMajor:0 DeviceMinor:600 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ce55de54-8441-4a16-8b57-598042869000/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:804 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/78a724cb7e61bebf9049146706c4cceb43b4093f3bcdd9805faeca5b8c0a66f6/userdata/shm DeviceMajor:0 DeviceMinor:842 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-848 
DeviceMajor:0 DeviceMinor:848 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-926 DeviceMajor:0 DeviceMinor:926 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1180 DeviceMajor:0 DeviceMinor:1180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9bed6748-374e-4d8a-92a0-36d7d735d6b7/volumes/kubernetes.io~projected/kube-api-access-pntn4 DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-623 DeviceMajor:0 DeviceMinor:623 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77ea2b54-bcc2-4c4e-9415-03984721b5b1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:633 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f47fa225-93fd-458b-b450-a0411e629afd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:476 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7cadeb05-9298-4bcf-b6f2-659c68eba020/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:826 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/019a9bcf24ce5ea8628fb0a222b64597a0b233bcb8a8eee4032689bd4a953ff1/userdata/shm DeviceMajor:0 DeviceMinor:428 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-483 DeviceMajor:0 DeviceMinor:483 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-913 DeviceMajor:0 DeviceMinor:913 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~projected/kube-api-access-p2mhb DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e941c759-ab95-4b30-a571-6c132ab0e639/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:720 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0d134032-1c35-4b69-9336-bcdc9c1cb87d/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:802 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:446 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2acc6d35-5679-4fac-970f-3d2ff954cc33/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:605 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-764 DeviceMajor:0 DeviceMinor:764 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/34cdfb95fd9aaeec9095ab977f01f62ceb0c9128bfe1c704df13557634391673/userdata/shm DeviceMajor:0 DeviceMinor:1211 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1041 DeviceMajor:0 DeviceMinor:1041 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1048 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-923 DeviceMajor:0 DeviceMinor:923 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1205 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-656 DeviceMajor:0 DeviceMinor:656 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/540b41b0-f574-46b9-8b2f-19e90ad5d0ce/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:148 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fce1914660e945c88d472e8a5d86bf17798d1db67260addab80c44f005293735/userdata/shm DeviceMajor:0 DeviceMinor:1065 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-536 DeviceMajor:0 DeviceMinor:536 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0bb71701a766cdffc304bcc019ce529a6db6f3a0ac5021de9ad0b58b382526fa/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fce9f67d-0b27-41e3-ba4c-ed9cca25703e/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:460 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-519 DeviceMajor:0 DeviceMinor:519 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-668 DeviceMajor:0 DeviceMinor:668 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ea16701-bd22-4fc0-90ea-f114b52574f8/volumes/kubernetes.io~projected/kube-api-access-22p85 DeviceMajor:0 DeviceMinor:1209 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/54d3f0365402e28903e1c308fe49c67a2e4e6a051bae978305cca7c73e782ab8/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/384035756c76cf55d496b6a1d9d1c2ae74da05b1b5fe6f287cc5eebff9461073/userdata/shm DeviceMajor:0 DeviceMinor:482 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-915 DeviceMajor:0 DeviceMinor:915 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-998 DeviceMajor:0 DeviceMinor:998 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-927 DeviceMajor:0 DeviceMinor:927 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/35e97ed9-695d-483e-8878-4f231c79f1d2/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:496 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-573 DeviceMajor:0 DeviceMinor:573 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-920 DeviceMajor:0 DeviceMinor:920 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-723 DeviceMajor:0 DeviceMinor:723 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f575cb15d53ccde2ef110c34dc5bda0d2dd2200d5c840f4afa64c209dc8f16aa/userdata/shm DeviceMajor:0 DeviceMinor:366 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-734 DeviceMajor:0 DeviceMinor:734 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6dc6e354bc34576e2c51f7938beb3db18ad7bf25caa74761e84175165545f5f8/userdata/shm DeviceMajor:0 DeviceMinor:815 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e96ce488-0099-43de-9933-425b7c981055/volumes/kubernetes.io~projected/kube-api-access-7xp47 DeviceMajor:0 DeviceMinor:1012 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1193 DeviceMajor:0 DeviceMinor:1193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1238 DeviceMajor:0 DeviceMinor:1238 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1300 DeviceMajor:0 DeviceMinor:1300 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fc3bdf010ad07d49b80af021e765b99347775940dd1bba2296554fae89223428/userdata/shm DeviceMajor:0 DeviceMinor:461 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-572 DeviceMajor:0 DeviceMinor:572 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/4b9d6485-cf67-49c5-99c1-b8582a0bab70/volumes/kubernetes.io~projected/kube-api-access-tgfqh DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/47dedc5d-1288-4020-b481-5dca68a7d437/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:808 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/06ccd378-23ee-49b7-a435-4b01de772155/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:1015 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b2ea613937f7a76a28d6f697b6791b043bc923f7e0e6135b9a4e3874c4b94ea7/userdata/shm DeviceMajor:0 DeviceMinor:817 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/047da142cf199754ddf37417b491aa94e635780094c0890acb8879faf9433391/userdata/shm DeviceMajor:0 DeviceMinor:185 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1242 DeviceMajor:0 DeviceMinor:1242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1245 DeviceMajor:0 DeviceMinor:1245 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8a544f5a-06b6-4297-a845-d81e9ab9ece7/volumes/kubernetes.io~projected/kube-api-access-t5zks DeviceMajor:0 DeviceMinor:344 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-671 DeviceMajor:0 DeviceMinor:671 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-838 DeviceMajor:0 DeviceMinor:838 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-141 DeviceMajor:0 DeviceMinor:141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/683ce0dfc8d9412d9d124855df897567cd87cd72bf0e18113725c86bfc97ad40/userdata/shm DeviceMajor:0 DeviceMinor:326 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1231 DeviceMajor:0 DeviceMinor:1231 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:137 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d7c80f4d-6b28-44f4-beef-01e705260452/volumes/kubernetes.io~projected/kube-api-access-d7sfw DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f81886b9-fcd3-4666-b550-0688072210f7/volumes/kubernetes.io~projected/kube-api-access-tmrjc DeviceMajor:0 DeviceMinor:321 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-944 DeviceMajor:0 DeviceMinor:944 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-173 DeviceMajor:0 DeviceMinor:173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-203 DeviceMajor:0 DeviceMinor:203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-975 DeviceMajor:0 DeviceMinor:975 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef6e18a1f50bdcddbb2a3ad1b4629af6c829d77c4f6ad1ac29b99eb32aa8f0b7/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2acc6d35-5679-4fac-970f-3d2ff954cc33/volumes/kubernetes.io~projected/kube-api-access-kc6cl DeviceMajor:0 DeviceMinor:608 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24d878bd-05cd-414e-94c1-a3e9ce637331/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:481 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/47dedc5d-1288-4020-b481-5dca68a7d437/volumes/kubernetes.io~projected/kube-api-access-hhq2x DeviceMajor:0 DeviceMinor:813 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~projected/kube-api-access-2fsdx DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-743 DeviceMajor:0 DeviceMinor:743 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/1b96edb6392cb5934a494772ea08cf68bb7ca6b123007bbc36b64354a478256c/userdata/shm DeviceMajor:0 DeviceMinor:139 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d9b02d3c-f671-4850-8c6e-315044a1376c/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-370 DeviceMajor:0 DeviceMinor:370 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8df5908ff558e7c538aba1ffb0d5c449e7824bb42a6bb700748a71cb6ece532/userdata/shm DeviceMajor:0 DeviceMinor:507 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a6ba7ab38272b1980170e10ad4e2da3b1c3208a4fc9cba6639e6e32d852d5560/userdata/shm DeviceMajor:0 DeviceMinor:825 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/28b57766d3c93686f32849ca9f209837a911e1a78e068a18fdf8af950eee54e7/userdata/shm DeviceMajor:0 DeviceMinor:897 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-135 DeviceMajor:0 DeviceMinor:135 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/affc63b7-db45-429d-82ff-e50f6aae51dc/volumes/kubernetes.io~projected/kube-api-access-5z8xh DeviceMajor:0 DeviceMinor:795 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e6f93af9-bdbb-4319-8ddb-e5458e8a9275/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:58 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1330 
DeviceMajor:0 DeviceMinor:1330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ce55de54-8441-4a16-8b57-598042869000/volumes/kubernetes.io~projected/kube-api-access-6sh26 DeviceMajor:0 DeviceMinor:812 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b0a29266-d968-444d-82bb-085ff1d6e506/volumes/kubernetes.io~projected/kube-api-access-zx8dp DeviceMajor:0 DeviceMinor:1178 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-760 DeviceMajor:0 DeviceMinor:760 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/878aa813-a8b9-4a6f-8086-778df276d0d7/volumes/kubernetes.io~projected/kube-api-access-58xrl DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-650 DeviceMajor:0 DeviceMinor:650 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/57803492-e1dd-4994-8330-1e9b393d54fd/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:901 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/404ed298090e42206d6c1bf4817333cb8ad6772bcba069fb38caf746806a1e14/userdata/shm DeviceMajor:0 DeviceMinor:1212 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1219 DeviceMajor:0 DeviceMinor:1219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-181 DeviceMajor:0 DeviceMinor:181 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7d0a976c-1492-4989-a5ff-e386564dd6ba/volumes/kubernetes.io~projected/kube-api-access-wplcg DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-450 DeviceMajor:0 DeviceMinor:450 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d58817c-970f-47b1-a5a5-a491f3e93426/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:497 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-691 DeviceMajor:0 DeviceMinor:691 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-516 DeviceMajor:0 DeviceMinor:516 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-456 DeviceMajor:0 DeviceMinor:456 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-917 DeviceMajor:0 DeviceMinor:917 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/860f93b7f1bef63565efa90fb0877c6e364d6648096b1b89b73c03207fe0536b/userdata/shm DeviceMajor:0 DeviceMinor:1013 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-374 DeviceMajor:0 DeviceMinor:374 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3/volumes/kubernetes.io~projected/kube-api-access-wfl9v DeviceMajor:0 DeviceMinor:667 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d2460f89a8ee17465ca8b123cba158a911b19401cc35323955dae6be552d4e5d/userdata/shm DeviceMajor:0 DeviceMinor:1064 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1224 DeviceMajor:0 DeviceMinor:1224 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-744 DeviceMajor:0 DeviceMinor:744 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-212 DeviceMajor:0 DeviceMinor:212 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/945907dd-f6b3-400f-b539-e1310eb11dd7/volumes/kubernetes.io~projected/kube-api-access-wbk8g DeviceMajor:0 DeviceMinor:896 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/652aff135f7d81e9986b86e7980c1074aab42baa8a4fc667f78c2a4b153be766/userdata/shm DeviceMajor:0 DeviceMinor:800 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-128 DeviceMajor:0 DeviceMinor:128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f348bffa-b2f6-4695-88a7-923625e7fb02/volumes/kubernetes.io~projected/kube-api-access-5wr82 DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-399 DeviceMajor:0 DeviceMinor:399 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-452 DeviceMajor:0 DeviceMinor:452 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-606 DeviceMajor:0 DeviceMinor:606 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/73ba4f16-0217-4bf1-8fc2-6b385eda0771/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1057 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bdad149d-da6f-49ac-85e5-deb01f161166/volumes/kubernetes.io~projected/kube-api-access-llgnr DeviceMajor:0 DeviceMinor:1080 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1096 DeviceMajor:0 DeviceMinor:1096 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-558 DeviceMajor:0 DeviceMinor:558 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f5aa73f30470446484267ee08c4016bd9826913f9a65531b7d70349b1291252e/userdata/shm DeviceMajor:0 DeviceMinor:1170 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e/volumes/kubernetes.io~projected/kube-api-access-ndf8h DeviceMajor:0 DeviceMinor:1308 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:019a9bcf24ce5ea MacAddress:c2:25:cc:81:23:39 Speed:10000 Mtu:8900} {Name:08edfd088420ec5 MacAddress:f2:01:dc:d2:fc:e9 Speed:10000 Mtu:8900} {Name:0bb71701a766cdf MacAddress:ce:42:f4:51:d3:2c Speed:10000 Mtu:8900} {Name:0ec420225c84b73 MacAddress:ba:44:e9:8d:99:41 Speed:10000 Mtu:8900} {Name:190e3738a8b34a4 MacAddress:36:31:1f:e1:d5:2a Speed:10000 
Mtu:8900} {Name:2306eee29ab96f1 MacAddress:a2:34:1e:16:6a:a9 Speed:10000 Mtu:8900} {Name:28b57766d3c9368 MacAddress:aa:c9:d6:27:f0:8d Speed:10000 Mtu:8900} {Name:309d089bca6e7c9 MacAddress:8e:d8:72:3e:6c:98 Speed:10000 Mtu:8900} {Name:3a2d420757c83bc MacAddress:ce:aa:ef:48:ee:4f Speed:10000 Mtu:8900} {Name:404ed298090e422 MacAddress:7e:2c:0b:26:5f:ce Speed:10000 Mtu:8900} {Name:4133bbba4cf25e1 MacAddress:46:06:8a:81:e8:8f Speed:10000 Mtu:8900} {Name:43ecd54108687a6 MacAddress:8a:4a:08:38:be:7c Speed:10000 Mtu:8900} {Name:541d291f058dd5b MacAddress:96:c1:8f:cd:90:0c Speed:10000 Mtu:8900} {Name:54d3f0365402e28 MacAddress:b2:d9:11:e1:8a:1a Speed:10000 Mtu:8900} {Name:6193af77058187e MacAddress:a6:63:0d:43:7c:bc Speed:10000 Mtu:8900} {Name:652aff135f7d81e MacAddress:46:76:b0:74:cc:d4 Speed:10000 Mtu:8900} {Name:655e1b023cf7c57 MacAddress:32:d4:75:5c:09:30 Speed:10000 Mtu:8900} {Name:683ce0dfc8d9412 MacAddress:2e:e7:01:a2:57:30 Speed:10000 Mtu:8900} {Name:6a5cd0e8536fcc5 MacAddress:26:d8:16:bf:5b:81 Speed:10000 Mtu:8900} {Name:6dc6e354bc34576 MacAddress:02:0c:d7:a9:23:c4 Speed:10000 Mtu:8900} {Name:752f9e4ea839752 MacAddress:e6:b7:ef:7b:51:70 Speed:10000 Mtu:8900} {Name:75945fe5446b395 MacAddress:e2:9c:87:d8:b5:37 Speed:10000 Mtu:8900} {Name:78a724cb7e61beb MacAddress:e6:61:13:e1:3e:fd Speed:10000 Mtu:8900} {Name:860f93b7f1bef63 MacAddress:02:80:9b:80:35:3a Speed:10000 Mtu:8900} {Name:86bfbedca58264a MacAddress:ce:f4:b9:e8:55:16 Speed:10000 Mtu:8900} {Name:9864787648d4e15 MacAddress:0a:b5:e3:a8:ca:bc Speed:10000 Mtu:8900} {Name:9959e2d6fc0e606 MacAddress:da:cc:c4:60:e2:2b Speed:10000 Mtu:8900} {Name:a16ebd5549e6846 MacAddress:5a:f5:0c:15:8d:26 Speed:10000 Mtu:8900} {Name:a6ba7ab38272b19 MacAddress:ea:1b:e0:ba:66:ca Speed:10000 Mtu:8900} {Name:a8df5908ff558e7 MacAddress:26:9a:bc:cd:da:f6 Speed:10000 Mtu:8900} {Name:a9b39c2081f6777 MacAddress:9a:27:fb:57:a9:20 Speed:10000 Mtu:8900} {Name:aacef958695b165 MacAddress:ca:b5:2f:1a:8f:7c Speed:10000 Mtu:8900} 
{Name:b2ea613937f7a76 MacAddress:aa:6d:3e:b5:30:a5 Speed:10000 Mtu:8900} {Name:b907614a9591efb MacAddress:6a:0f:ad:96:af:b9 Speed:10000 Mtu:8900} {Name:b933426682f905b MacAddress:6a:4f:1d:98:87:39 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:72:92:c2:5d:57:03 Speed:0 Mtu:8900} {Name:cabaebba9338acf MacAddress:96:e2:6e:16:3c:f6 Speed:10000 Mtu:8900} {Name:cad0c8926c7213a MacAddress:b2:c5:5b:90:63:7e Speed:10000 Mtu:8900} {Name:ce27248bc3e9346 MacAddress:96:08:bd:ab:89:df Speed:10000 Mtu:8900} {Name:cfa9c4cdf55305b MacAddress:f2:d0:62:3f:d7:7f Speed:10000 Mtu:8900} {Name:d0a039f1b3c97c2 MacAddress:4a:02:a8:9d:b4:8b Speed:10000 Mtu:8900} {Name:d2460f89a8ee174 MacAddress:3e:ee:8c:c0:81:14 Speed:10000 Mtu:8900} {Name:d3d2ae481af3820 MacAddress:be:23:3d:8f:ff:64 Speed:10000 Mtu:8900} {Name:d6786dcf48d821a MacAddress:22:c8:26:b4:e4:89 Speed:10000 Mtu:8900} {Name:e7536105ed8ca3b MacAddress:7e:dd:d9:6e:07:25 Speed:10000 Mtu:8900} {Name:e8887c7d6eee650 MacAddress:72:16:76:72:4f:99 Speed:10000 Mtu:8900} {Name:ee9a1940c33a806 MacAddress:c6:32:92:48:4c:0f Speed:10000 Mtu:8900} {Name:ef6e18a1f50bdcd MacAddress:7e:01:66:05:96:e8 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:69:55:75 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:3c:f3:9f Speed:-1 Mtu:9000} {Name:f17488314313adf MacAddress:36:72:e4:17:17:98 Speed:10000 Mtu:8900} {Name:f2a89a16b5cdf52 MacAddress:be:63:a2:ea:f8:79 Speed:10000 Mtu:8900} {Name:f575cb15d53ccde MacAddress:42:d1:05:e2:f4:d1 Speed:10000 Mtu:8900} {Name:f5a1d137318bed9 MacAddress:be:4c:e2:9b:d9:f0 Speed:10000 Mtu:8900} {Name:f5aa73f30470446 MacAddress:c2:5d:d6:e6:e7:02 Speed:10000 Mtu:8900} {Name:fc3bdf010ad07d4 MacAddress:de:a6:6e:7c:bf:07 Speed:10000 Mtu:8900} {Name:fcbd0dfcd13ca5f MacAddress:ee:22:80:80:1d:14 Speed:10000 Mtu:8900} {Name:fce1914660e945c MacAddress:92:f7:e8:b9:05:a1 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 
MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:de:1e:e7:99:8c:28 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] 
Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315214 26474 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315286 26474 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315545 26474 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315694 26474 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315719 26474 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315914 26474 topology_manager.go:138] "Creating topology manager with none policy" Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315923 26474 container_manager_linux.go:303] "Creating device plugin manager" Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315930 26474 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315950 26474 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 13:14:38.315961 master-0 kubenswrapper[26474]: I0223 13:14:38.315986 26474 state_mem.go:36] "Initialized new in-memory state store" Feb 23 13:14:38.317460 master-0 kubenswrapper[26474]: I0223 13:14:38.316074 26474 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 23 13:14:38.317460 master-0 kubenswrapper[26474]: I0223 13:14:38.316130 26474 kubelet.go:418] "Attempting to sync node with API server" Feb 23 13:14:38.317460 master-0 kubenswrapper[26474]: I0223 13:14:38.316144 26474 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 23 13:14:38.317460 master-0 kubenswrapper[26474]: I0223 13:14:38.316158 26474 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 23 13:14:38.317460 master-0 kubenswrapper[26474]: I0223 13:14:38.316170 26474 kubelet.go:324] "Adding apiserver pod source" Feb 23 13:14:38.317460 master-0 kubenswrapper[26474]: I0223 13:14:38.316181 26474 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 23 13:14:38.318104 master-0 kubenswrapper[26474]: I0223 13:14:38.318034 26474 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 23 13:14:38.318437 master-0 kubenswrapper[26474]: I0223 13:14:38.318397 26474 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 23 13:14:38.318879 master-0 kubenswrapper[26474]: I0223 13:14:38.318838 26474 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 23 13:14:38.319081 master-0 kubenswrapper[26474]: I0223 13:14:38.319054 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 13:14:38.319143 master-0 kubenswrapper[26474]: I0223 13:14:38.319085 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 13:14:38.319143 master-0 kubenswrapper[26474]: I0223 13:14:38.319097 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 13:14:38.319143 master-0 kubenswrapper[26474]: I0223 13:14:38.319106 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 13:14:38.319143 master-0 kubenswrapper[26474]: I0223 13:14:38.319115 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 13:14:38.319143 master-0 kubenswrapper[26474]: I0223 13:14:38.319125 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 13:14:38.319143 master-0 kubenswrapper[26474]: I0223 13:14:38.319135 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 13:14:38.319143 master-0 kubenswrapper[26474]: I0223 13:14:38.319144 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 23 13:14:38.319750 master-0 kubenswrapper[26474]: I0223 13:14:38.319181 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 13:14:38.319750 master-0 kubenswrapper[26474]: I0223 13:14:38.319193 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 13:14:38.319750 master-0 kubenswrapper[26474]: I0223 13:14:38.319221 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 13:14:38.319750 master-0 kubenswrapper[26474]: I0223 13:14:38.319238 26474 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 23 13:14:38.319750 master-0 kubenswrapper[26474]: I0223 13:14:38.319278 26474 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 23 13:14:38.320168 master-0 kubenswrapper[26474]: I0223 13:14:38.320127 26474 server.go:1280] "Started kubelet" Feb 23 13:14:38.320392 master-0 kubenswrapper[26474]: I0223 13:14:38.320286 26474 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 13:14:38.320646 master-0 kubenswrapper[26474]: I0223 13:14:38.320319 26474 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 23 13:14:38.320709 master-0 kubenswrapper[26474]: I0223 13:14:38.320681 26474 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 23 13:14:38.323474 master-0 kubenswrapper[26474]: I0223 13:14:38.321280 26474 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 23 13:14:38.323474 master-0 kubenswrapper[26474]: I0223 13:14:38.323241 26474 server.go:449] "Adding debug handlers to kubelet server" Feb 23 13:14:38.321569 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 23 13:14:38.354333 master-0 kubenswrapper[26474]: I0223 13:14:38.354055 26474 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 13:14:38.354333 master-0 kubenswrapper[26474]: I0223 13:14:38.354129 26474 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 13:14:38.354333 master-0 kubenswrapper[26474]: I0223 13:14:38.354243 26474 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 23 13:14:38.354333 master-0 kubenswrapper[26474]: I0223 13:14:38.354274 26474 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 23 13:14:38.354333 master-0 kubenswrapper[26474]: E0223 13:14:38.354306 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:38.354592 master-0 kubenswrapper[26474]: I0223 13:14:38.354202 26474 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 12:51:08 +0000 UTC, rotation deadline is 2026-02-24 06:31:20.738187705 +0000 UTC Feb 23 13:14:38.354592 master-0 kubenswrapper[26474]: I0223 13:14:38.354501 26474 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h16m42.383689197s for next certificate rotation Feb 23 13:14:38.354592 master-0 kubenswrapper[26474]: I0223 13:14:38.354518 26474 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 23 13:14:38.356291 master-0 kubenswrapper[26474]: I0223 13:14:38.356147 26474 factory.go:55] Registering systemd factory Feb 23 13:14:38.356291 master-0 kubenswrapper[26474]: I0223 13:14:38.356186 26474 factory.go:221] Registration of the systemd container factory successfully Feb 23 13:14:38.358180 master-0 kubenswrapper[26474]: I0223 13:14:38.356743 26474 factory.go:153] Registering CRI-O factory Feb 23 13:14:38.358180 master-0 kubenswrapper[26474]: I0223 13:14:38.356770 26474 factory.go:221] Registration of the crio container factory successfully Feb 23 13:14:38.358180 
master-0 kubenswrapper[26474]: I0223 13:14:38.356863 26474 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 23 13:14:38.358180 master-0 kubenswrapper[26474]: I0223 13:14:38.356897 26474 factory.go:103] Registering Raw factory Feb 23 13:14:38.358180 master-0 kubenswrapper[26474]: I0223 13:14:38.356914 26474 manager.go:1196] Started watching for new ooms in manager Feb 23 13:14:38.358180 master-0 kubenswrapper[26474]: I0223 13:14:38.357743 26474 manager.go:319] Starting recovery of all containers Feb 23 13:14:38.376012 master-0 kubenswrapper[26474]: I0223 13:14:38.375917 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35e97ed9-695d-483e-8878-4f231c79f1d2" volumeName="kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb" seLinuxMountContext="" Feb 23 13:14:38.376012 master-0 kubenswrapper[26474]: I0223 13:14:38.376007 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b0d84-a344-43e4-b9c4-c8e0670528de" volumeName="kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376024 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" volumeName="kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376036 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" 
volumeName="kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376053 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18386753-ec74-456d-838d-98c07c169b4b" volumeName="kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376064 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="affc63b7-db45-429d-82ff-e50f6aae51dc" volumeName="kubernetes.io/projected/affc63b7-db45-429d-82ff-e50f6aae51dc-kube-api-access-5z8xh" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376075 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71885db-c29e-429a-aa1f-1c274796a69f" volumeName="kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376088 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376106 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a" volumeName="kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376118 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ea16701-bd22-4fc0-90ea-f114b52574f8" 
volumeName="kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-metrics-client-ca" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376130 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b" volumeName="kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376144 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f47fa225-93fd-458b-b450-a0411e629afd" volumeName="kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376157 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47dedc5d-1288-4020-b481-5dca68a7d437" volumeName="kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-config" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376175 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376187 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f47fa225-93fd-458b-b450-a0411e629afd" volumeName="kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376198 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" 
volumeName="kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376209 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" volumeName="kubernetes.io/secret/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-catalogserver-certs" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376222 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2acc6d35-5679-4fac-970f-3d2ff954cc33" volumeName="kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376252 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" volumeName="kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376269 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-trusted-ca-bundle" seLinuxMountContext="" Feb 23 13:14:38.376276 master-0 kubenswrapper[26474]: I0223 13:14:38.376282 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99f14e64-228f-4b9e-991f-ee398fe7bb8a" volumeName="kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376297 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d58817c-970f-47b1-a5a5-a491f3e93426" 
volumeName="kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376313 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2acc6d35-5679-4fac-970f-3d2ff954cc33" volumeName="kubernetes.io/projected/2acc6d35-5679-4fac-970f-3d2ff954cc33-kube-api-access-kc6cl" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376329 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18386753-ec74-456d-838d-98c07c169b4b" volumeName="kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376379 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="878aa813-a8b9-4a6f-8086-778df276d0d7" volumeName="kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376398 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a663ecaf-ced2-4c7d-91c8-44e94851f7d6" volumeName="kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376415 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="affc63b7-db45-429d-82ff-e50f6aae51dc" volumeName="kubernetes.io/secret/affc63b7-db45-429d-82ff-e50f6aae51dc-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376433 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="878aa813-a8b9-4a6f-8086-778df276d0d7" volumeName="kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376451 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376468 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57803492-e1dd-4994-8330-1e9b393d54fd" volumeName="kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376486 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ea2b54-bcc2-4c4e-9415-03984721b5b1" volumeName="kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376499 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ea2b54-bcc2-4c4e-9415-03984721b5b1" volumeName="kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376521 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="945907dd-f6b3-400f-b539-e1310eb11dd7" volumeName="kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376536 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="d48d286d-4f37-4027-86cd-1580e6076613" volumeName="kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376549 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b459832-b875-49a6-a7c3-253fa6c8e45a" volumeName="kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376564 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b459832-b875-49a6-a7c3-253fa6c8e45a" volumeName="kubernetes.io/projected/5b459832-b875-49a6-a7c3-253fa6c8e45a-kube-api-access-wg9l8" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376577 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ff7868e-f0d3-4c63-901f-fed11d623cf1" volumeName="kubernetes.io/empty-dir/6ff7868e-f0d3-4c63-901f-fed11d623cf1-cache" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376591 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3" volumeName="kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-utilities" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376603 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ea16701-bd22-4fc0-90ea-f114b52574f8" volumeName="kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376667 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3a5284f9-cbb7-400b-ab39-bfef60ec198b" volumeName="kubernetes.io/projected/3a5284f9-cbb7-400b-ab39-bfef60ec198b-kube-api-access-j744d" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376688 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ede583b-44b0-42af-92c9-f7b8938f7843" volumeName="kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376701 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ea2b54-bcc2-4c4e-9415-03984721b5b1" volumeName="kubernetes.io/projected/77ea2b54-bcc2-4c4e-9415-03984721b5b1-kube-api-access-cksnd" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376716 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-image-import-ca" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376735 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" volumeName="kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-kube-api-access-jftvv" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376749 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d134032-1c35-4b69-9336-bcdc9c1cb87d" volumeName="kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376763 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="2acc6d35-5679-4fac-970f-3d2ff954cc33" volumeName="kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376776 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c80f4d-6b28-44f4-beef-01e705260452" volumeName="kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376789 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e96ce488-0099-43de-9933-425b7c981055" volumeName="kubernetes.io/projected/e96ce488-0099-43de-9933-425b7c981055-kube-api-access-7xp47" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376804 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e96ce488-0099-43de-9933-425b7c981055" volumeName="kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-utilities" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376818 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ea2b54-bcc2-4c4e-9415-03984721b5b1" volumeName="kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376834 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3" volumeName="kubernetes.io/projected/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-kube-api-access-wfl9v" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376849 26474 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="a663ecaf-ced2-4c7d-91c8-44e94851f7d6" volumeName="kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376868 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce55de54-8441-4a16-8b57-598042869000" volumeName="kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376884 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a" volumeName="kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376899 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2c50f9a-8c73-4cb9-9cbf-2565496212a6" volumeName="kubernetes.io/secret/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-key" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376915 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" volumeName="kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376933 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73ba4f16-0217-4bf1-8fc2-6b385eda0771" volumeName="kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.376987 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="f47fa225-93fd-458b-b450-a0411e629afd" volumeName="kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377007 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-encryption-config" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377023 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce55de54-8441-4a16-8b57-598042869000" volumeName="kubernetes.io/projected/ce55de54-8441-4a16-8b57-598042869000-kube-api-access-6sh26" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377038 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d48d286d-4f37-4027-86cd-1580e6076613" volumeName="kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377060 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d134032-1c35-4b69-9336-bcdc9c1cb87d" volumeName="kubernetes.io/projected/0d134032-1c35-4b69-9336-bcdc9c1cb87d-kube-api-access-wjkkc" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377076 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73ba4f16-0217-4bf1-8fc2-6b385eda0771" volumeName="kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377090 26474 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="d7c80f4d-6b28-44f4-beef-01e705260452" volumeName="kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377105 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377122 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18386753-ec74-456d-838d-98c07c169b4b" volumeName="kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377137 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71cb2f21-6d27-411f-9c2f-d5fa286895a7" volumeName="kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7" seLinuxMountContext="" Feb 23 13:14:38.377121 master-0 kubenswrapper[26474]: I0223 13:14:38.377151 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377165 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ccbaed9-ab28-47c0-a585-648b9251fd11" volumeName="kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377181 26474 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="77ea2b54-bcc2-4c4e-9415-03984721b5b1" volumeName="kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377196 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99f14e64-228f-4b9e-991f-ee398fe7bb8a" volumeName="kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377211 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" volumeName="kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377227 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0e3072-a35c-4404-891c-f31fafd0b4b1" volumeName="kubernetes.io/projected/9e0e3072-a35c-4404-891c-f31fafd0b4b1-kube-api-access-rmcjv" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377242 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ea16701-bd22-4fc0-90ea-f114b52574f8" volumeName="kubernetes.io/projected/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-api-access-22p85" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377274 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d7c1ea0-e3c1-4494-bb27-058200b93ed7" volumeName="kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377292 26474 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6ff7868e-f0d3-4c63-901f-fed11d623cf1" volumeName="kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-ca-certs" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377309 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73ba4f16-0217-4bf1-8fc2-6b385eda0771" volumeName="kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377325 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d76d5e5a-3009-42c9-b981-e6ddfa3ba13e" volumeName="kubernetes.io/secret/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-webhook-certs" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377360 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdad149d-da6f-49ac-85e5-deb01f161166" volumeName="kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377379 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ccbaed9-ab28-47c0-a585-648b9251fd11" volumeName="kubernetes.io/projected/3ccbaed9-ab28-47c0-a585-648b9251fd11-kube-api-access-q78mm" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377396 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377414 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="7d0a976c-1492-4989-a5ff-e386564dd6ba" volumeName="kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377428 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="945907dd-f6b3-400f-b539-e1310eb11dd7" volumeName="kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377493 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae8b0e50-59ee-44a9-9a66-8febb833b771" volumeName="kubernetes.io/empty-dir/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-textfile" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377512 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce55de54-8441-4a16-8b57-598042869000" volumeName="kubernetes.io/empty-dir/ce55de54-8441-4a16-8b57-598042869000-snapshots" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377529 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24d878bd-05cd-414e-94c1-a3e9ce637331" volumeName="kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377546 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cadeb05-9298-4bcf-b6f2-659c68eba020" volumeName="kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377563 26474 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8422281d-af45-4f17-8f15-ac3fd9da4bbc" volumeName="kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-tmp" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377580 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377595 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="898e6c96-73d5-4dc5-a383-986599a5bcd9" volumeName="kubernetes.io/projected/898e6c96-73d5-4dc5-a383-986599a5bcd9-kube-api-access-znjcw" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377609 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0a29266-d968-444d-82bb-085ff1d6e506" volumeName="kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377628 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c61886-6cc7-44aa-b56a-81cdcc670993" volumeName="kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377645 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b0d84-a344-43e4-b9c4-c8e0670528de" volumeName="kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377660 26474 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="3daf0176-92e7-4642-8643-4afbefb77235" volumeName="kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377673 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8bdbf92-61e3-41e9-a48d-4259cee80e9f" volumeName="kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377687 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377702 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06ccd378-23ee-49b7-a435-4b01de772155" volumeName="kubernetes.io/secret/06ccd378-23ee-49b7-a435-4b01de772155-proxy-tls" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377718 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a5284f9-cbb7-400b-ab39-bfef60ec198b" volumeName="kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-utilities" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377732 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0e3072-a35c-4404-891c-f31fafd0b4b1" volumeName="kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-utilities" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377746 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" volumeName="kubernetes.io/empty-dir/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-cache" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377761 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ede583b-44b0-42af-92c9-f7b8938f7843" volumeName="kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377776 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-audit" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377795 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c61886-6cc7-44aa-b56a-81cdcc670993" volumeName="kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377809 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f93af9-bdbb-4319-8ddb-e5458e8a9275" volumeName="kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377833 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57803492-e1dd-4994-8330-1e9b393d54fd" volumeName="kubernetes.io/configmap/57803492-e1dd-4994-8330-1e9b393d54fd-mcd-auth-proxy-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377850 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5ede583b-44b0-42af-92c9-f7b8938f7843" volumeName="kubernetes.io/projected/5ede583b-44b0-42af-92c9-f7b8938f7843-kube-api-access-p7b4r" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377866 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cf2e1eb-fb95-4401-9112-57aee9ebe1e6" volumeName="kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377884 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae8b0e50-59ee-44a9-9a66-8febb833b771" volumeName="kubernetes.io/projected/ae8b0e50-59ee-44a9-9a66-8febb833b771-kube-api-access-n8c76" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377899 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0a29266-d968-444d-82bb-085ff1d6e506" volumeName="kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377913 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf57b864-25d7-4420-9052-04dd580a9f7d" volumeName="kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377927 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf57b864-25d7-4420-9052-04dd580a9f7d" volumeName="kubernetes.io/projected/bf57b864-25d7-4420-9052-04dd580a9f7d-kube-api-access-bdbct" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377941 26474 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ce55de54-8441-4a16-8b57-598042869000" volumeName="kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377958 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24d878bd-05cd-414e-94c1-a3e9ce637331" volumeName="kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377973 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3daf0176-92e7-4642-8643-4afbefb77235" volumeName="kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.377986 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8422281d-af45-4f17-8f15-ac3fd9da4bbc" volumeName="kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-tuned" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378000 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3" volumeName="kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-catalog-content" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378012 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae8b0e50-59ee-44a9-9a66-8febb833b771" volumeName="kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: E0223 13:14:38.378145 26474 kubelet.go:1495] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378025 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71885db-c29e-429a-aa1f-1c274796a69f" volumeName="kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378702 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f93af9-bdbb-4319-8ddb-e5458e8a9275" volumeName="kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378720 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47dedc5d-1288-4020-b481-5dca68a7d437" volumeName="kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-images" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378766 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0e3072-a35c-4404-891c-f31fafd0b4b1" volumeName="kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-catalog-content" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378783 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71885db-c29e-429a-aa1f-1c274796a69f" volumeName="kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config" seLinuxMountContext="" Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378799 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378845 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae8b0e50-59ee-44a9-9a66-8febb833b771" volumeName="kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378865 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d76d5e5a-3009-42c9-b981-e6ddfa3ba13e" volumeName="kubernetes.io/projected/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-kube-api-access-ndf8h" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378881 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35e97ed9-695d-483e-8878-4f231c79f1d2" volumeName="kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378897 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b9d6485-cf67-49c5-99c1-b8582a0bab70" volumeName="kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378938 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5793184d-de96-49ad-a060-0fa0cf278a9c" volumeName="kubernetes.io/projected/5793184d-de96-49ad-a060-0fa0cf278a9c-kube-api-access-v9dcr" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378960 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73ba4f16-0217-4bf1-8fc2-6b385eda0771" volumeName="kubernetes.io/projected/73ba4f16-0217-4bf1-8fc2-6b385eda0771-kube-api-access-7l66s" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378973 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99f14e64-228f-4b9e-991f-ee398fe7bb8a" volumeName="kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.378988 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a663ecaf-ced2-4c7d-91c8-44e94851f7d6" volumeName="kubernetes.io/projected/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-kube-api-access-nn9mt" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379032 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e941c759-ab95-4b30-a571-6c132ab0e639" volumeName="kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379047 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18386753-ec74-456d-838d-98c07c169b4b" volumeName="kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379062 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ede583b-44b0-42af-92c9-f7b8938f7843" volumeName="kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379106 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ea2b54-bcc2-4c4e-9415-03984721b5b1" volumeName="kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379124 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92eaa2e2-61cd-4279-a81f-72db51308148" volumeName="kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379139 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b0d84-a344-43e4-b9c4-c8e0670528de" volumeName="kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379152 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" volumeName="kubernetes.io/empty-dir/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-available-featuregates" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379381 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92eaa2e2-61cd-4279-a81f-72db51308148" volumeName="kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379459 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e96ce488-0099-43de-9933-425b7c981055" volumeName="kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-catalog-content" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379481 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ede583b-44b0-42af-92c9-f7b8938f7843" volumeName="kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379497 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="878aa813-a8b9-4a6f-8086-778df276d0d7" volumeName="kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379539 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9bed6748-374e-4d8a-92a0-36d7d735d6b7" volumeName="kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379559 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b12352eb-04d7-4419-b1bf-d08bca9da599" volumeName="kubernetes.io/projected/b12352eb-04d7-4419-b1bf-d08bca9da599-kube-api-access-cpnzd" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379574 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d58817c-970f-47b1-a5a5-a491f3e93426" volumeName="kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379592 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" volumeName="kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379608 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379622 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9b02d3c-f671-4850-8c6e-315044a1376c" volumeName="kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379636 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b" volumeName="kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379650 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47dedc5d-1288-4020-b481-5dca68a7d437" volumeName="kubernetes.io/projected/47dedc5d-1288-4020-b481-5dca68a7d437-kube-api-access-hhq2x" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379665 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="762249c6-b548-4733-8b78-64f73430bfbd" volumeName="kubernetes.io/empty-dir/762249c6-b548-4733-8b78-64f73430bfbd-tmpfs" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379681 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="762249c6-b548-4733-8b78-64f73430bfbd" volumeName="kubernetes.io/projected/762249c6-b548-4733-8b78-64f73430bfbd-kube-api-access-mfxjf" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379701 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-serving-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379719 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae8b0e50-59ee-44a9-9a66-8febb833b771" volumeName="kubernetes.io/configmap/ae8b0e50-59ee-44a9-9a66-8febb833b771-metrics-client-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379737 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c80f4d-6b28-44f4-beef-01e705260452" volumeName="kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379758 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5802841-52dc-4d15-a252-0eac70e9fbbc" volumeName="kubernetes.io/projected/e5802841-52dc-4d15-a252-0eac70e9fbbc-kube-api-access-nvg7b" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379774 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d134032-1c35-4b69-9336-bcdc9c1cb87d" volumeName="kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379863 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" volumeName="kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379915 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d0a976c-1492-4989-a5ff-e386564dd6ba" volumeName="kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379930 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92eaa2e2-61cd-4279-a81f-72db51308148" volumeName="kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.379950 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ea16701-bd22-4fc0-90ea-f114b52574f8" volumeName="kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380003 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf57b864-25d7-4420-9052-04dd580a9f7d" volumeName="kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380021 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2c50f9a-8c73-4cb9-9cbf-2565496212a6" volumeName="kubernetes.io/projected/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-kube-api-access-4b825" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380038 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71cb2f21-6d27-411f-9c2f-d5fa286895a7" volumeName="kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380052 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ea16701-bd22-4fc0-90ea-f114b52574f8" volumeName="kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380072 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0a29266-d968-444d-82bb-085ff1d6e506" volumeName="kubernetes.io/configmap/b0a29266-d968-444d-82bb-085ff1d6e506-metrics-client-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380093 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0a29266-d968-444d-82bb-085ff1d6e506" volumeName="kubernetes.io/projected/b0a29266-d968-444d-82bb-085ff1d6e506-kube-api-access-zx8dp" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380107 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d48d286d-4f37-4027-86cd-1580e6076613" volumeName="kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380131 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d58817c-970f-47b1-a5a5-a491f3e93426" volumeName="kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380147 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a5284f9-cbb7-400b-ab39-bfef60ec198b" volumeName="kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-catalog-content" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380167 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="898e6c96-73d5-4dc5-a383-986599a5bcd9" volumeName="kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380190 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/projected/922e0be5-23c2-481e-89be-e918dc4ce90c-kube-api-access-sl5r2" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380207 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06ccd378-23ee-49b7-a435-4b01de772155" volumeName="kubernetes.io/projected/06ccd378-23ee-49b7-a435-4b01de772155-kube-api-access-7cjfj" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380228 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a" volumeName="kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380252 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ff7868e-f0d3-4c63-901f-fed11d623cf1" volumeName="kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-kube-api-access-r6xw4" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380272 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8422281d-af45-4f17-8f15-ac3fd9da4bbc" volumeName="kubernetes.io/projected/8422281d-af45-4f17-8f15-ac3fd9da4bbc-kube-api-access-d6d4r" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380290 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdad149d-da6f-49ac-85e5-deb01f161166" volumeName="kubernetes.io/projected/bdad149d-da6f-49ac-85e5-deb01f161166-kube-api-access-llgnr" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380309 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54001c8e-cb57-47dc-8594-9daed4190bda" volumeName="kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380323 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" volumeName="kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380362 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2c50f9a-8c73-4cb9-9cbf-2565496212a6" volumeName="kubernetes.io/configmap/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-cabundle" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380379 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380399 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06ccd378-23ee-49b7-a435-4b01de772155" volumeName="kubernetes.io/configmap/06ccd378-23ee-49b7-a435-4b01de772155-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380416 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d58817c-970f-47b1-a5a5-a491f3e93426" volumeName="kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380432 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3daf0176-92e7-4642-8643-4afbefb77235" volumeName="kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380447 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" volumeName="kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380470 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-client" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380487 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e941c759-ab95-4b30-a571-6c132ab0e639" volumeName="kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380504 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d134032-1c35-4b69-9336-bcdc9c1cb87d" volumeName="kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380522 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24d878bd-05cd-414e-94c1-a3e9ce637331" volumeName="kubernetes.io/projected/24d878bd-05cd-414e-94c1-a3e9ce637331-kube-api-access" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380542 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9bed6748-374e-4d8a-92a0-36d7d735d6b7" volumeName="kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380600 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b459832-b875-49a6-a7c3-253fa6c8e45a" volumeName="kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-auth-proxy-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380622 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" volumeName="kubernetes.io/projected/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-kube-api-access-mhbhv" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380647 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380663 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c61886-6cc7-44aa-b56a-81cdcc670993" volumeName="kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380683 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d7c1ea0-e3c1-4494-bb27-058200b93ed7" volumeName="kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380704 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35e97ed9-695d-483e-8878-4f231c79f1d2" volumeName="kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380721 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="762249c6-b548-4733-8b78-64f73430bfbd" volumeName="kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380762 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ea16701-bd22-4fc0-90ea-f114b52574f8" volumeName="kubernetes.io/empty-dir/9ea16701-bd22-4fc0-90ea-f114b52574f8-volume-directive-shadow" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380785 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" volumeName="kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380806 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92eaa2e2-61cd-4279-a81f-72db51308148" volumeName="kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380824 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce55de54-8441-4a16-8b57-598042869000" volumeName="kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380843 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5802841-52dc-4d15-a252-0eac70e9fbbc" volumeName="kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380863 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="762249c6-b548-4733-8b78-64f73430bfbd" volumeName="kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380885 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cf2e1eb-fb95-4401-9112-57aee9ebe1e6" volumeName="kubernetes.io/projected/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-kube-api-access-znzzv" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380903 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a544f5a-06b6-4297-a845-d81e9ab9ece7" volumeName="kubernetes.io/projected/8a544f5a-06b6-4297-a845-d81e9ab9ece7-kube-api-access-t5zks" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380926 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="922e0be5-23c2-481e-89be-e918dc4ce90c" volumeName="kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-serving-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380944 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57803492-e1dd-4994-8330-1e9b393d54fd" volumeName="kubernetes.io/projected/57803492-e1dd-4994-8330-1e9b393d54fd-kube-api-access-vg2gm" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380963 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d0a976c-1492-4989-a5ff-e386564dd6ba" volumeName="kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.380986 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99f14e64-228f-4b9e-991f-ee398fe7bb8a" volumeName="kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381003 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9bed6748-374e-4d8a-92a0-36d7d735d6b7" volumeName="kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381026 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ccbaed9-ab28-47c0-a585-648b9251fd11" volumeName="kubernetes.io/configmap/3ccbaed9-ab28-47c0-a585-648b9251fd11-metrics-client-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381046 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" volumeName="kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381070 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="878aa813-a8b9-4a6f-8086-778df276d0d7" volumeName="kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381090 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dc83a57-34c5-4c64-97d3-b6191ba690eb" volumeName="kubernetes.io/projected/6dc83a57-34c5-4c64-97d3-b6191ba690eb-kube-api-access-b64s6" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381112 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="945907dd-f6b3-400f-b539-e1310eb11dd7" volumeName="kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381138 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" volumeName="kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381156 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73ba4f16-0217-4bf1-8fc2-6b385eda0771" volumeName="kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381197 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8bdbf92-61e3-41e9-a48d-4259cee80e9f" volumeName="kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381217 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" volumeName="kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381240 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdad149d-da6f-49ac-85e5-deb01f161166" volumeName="kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381260 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381281 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f47fa225-93fd-458b-b450-a0411e629afd" volumeName="kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381297 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" volumeName="kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381312 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ea2b54-bcc2-4c4e-9415-03984721b5b1" volumeName="kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381332 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cadeb05-9298-4bcf-b6f2-659c68eba020" volumeName="kubernetes.io/projected/7cadeb05-9298-4bcf-b6f2-659c68eba020-kube-api-access-ght2z" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381366 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cadeb05-9298-4bcf-b6f2-659c68eba020" volumeName="kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381383 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" volumeName="kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-ca-certs" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381398 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71cb2f21-6d27-411f-9c2f-d5fa286895a7" volumeName="kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381415 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c61886-6cc7-44aa-b56a-81cdcc670993" volumeName="kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381493 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c80f4d-6b28-44f4-beef-01e705260452" volumeName="kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381513 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b459832-b875-49a6-a7c3-253fa6c8e45a" volumeName="kubernetes.io/secret/5b459832-b875-49a6-a7c3-253fa6c8e45a-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381530 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" volumeName="kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381549 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a663ecaf-ced2-4c7d-91c8-44e94851f7d6" volumeName="kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-auth-proxy-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381565 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7c61886-6cc7-44aa-b56a-81cdcc670993" volumeName="kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381581 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f81886b9-fcd3-4666-b550-0688072210f7" volumeName="kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381599 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ccbaed9-ab28-47c0-a585-648b9251fd11" volumeName="kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381619 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47dedc5d-1288-4020-b481-5dca68a7d437" volumeName="kubernetes.io/secret/47dedc5d-1288-4020-b481-5dca68a7d437-machine-api-operator-tls" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381636 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="898e6c96-73d5-4dc5-a383-986599a5bcd9" volumeName="kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381651 26474 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f348bffa-b2f6-4695-88a7-923625e7fb02" volumeName="kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82" seLinuxMountContext=""
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381664 26474 reconstruct.go:97] "Volume reconstruction finished"
Feb 23 13:14:38.381935 master-0 kubenswrapper[26474]: I0223 13:14:38.381675 26474 reconciler.go:26] "Reconciler: start to sync state"
Feb 23 13:14:38.387794 master-0 kubenswrapper[26474]: I0223 13:14:38.387372 26474 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 23 13:14:38.392047 master-0 kubenswrapper[26474]: I0223 13:14:38.391994 26474 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 23 13:14:38.392145 master-0 kubenswrapper[26474]: I0223 13:14:38.392069 26474 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 23 13:14:38.392145 master-0 kubenswrapper[26474]: I0223 13:14:38.392095 26474 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 23 13:14:38.392217 master-0 kubenswrapper[26474]: E0223 13:14:38.392149 26474 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 23 13:14:38.437515 master-0 kubenswrapper[26474]: I0223 13:14:38.437290 26474 generic.go:334] "Generic (PLEG): container finished" podID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerID="530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88" exitCode=0
Feb 23 13:14:38.441033 master-0 kubenswrapper[26474]: I0223 13:14:38.440949 26474 generic.go:334] "Generic (PLEG): container finished" podID="71cb2f21-6d27-411f-9c2f-d5fa286895a7" containerID="44ecc8bd157550465c3780c8f90979b8897639b6eed19a94cadcc31f44d1bf1b" exitCode=0
Feb 23 13:14:38.445569 master-0 kubenswrapper[26474]: I0223 13:14:38.445494 26474 generic.go:334] "Generic (PLEG): container finished" podID="b3e4636e-0cb6-492b-89b0-17ca9ff9e252" containerID="7ad0e23958703f89572138a68f4fe4a1db5362d40c0141e67bedc3ac0b588812" exitCode=0
Feb 23 13:14:38.448450 master-0 kubenswrapper[26474]: I0223 13:14:38.448410 26474 generic.go:334] "Generic (PLEG): container finished" podID="f47fa225-93fd-458b-b450-a0411e629afd" containerID="5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b" exitCode=0
Feb 23 13:14:38.453268 master-0 kubenswrapper[26474]: I0223 13:14:38.453206 26474 generic.go:334] "Generic (PLEG): container finished" podID="e96ce488-0099-43de-9933-425b7c981055" containerID="375da7a63fe7d62490c8f3a3e0196f9371b3c8f858218ccbaf2b076eee1f97b8" exitCode=0
Feb 23 13:14:38.453268 master-0 kubenswrapper[26474]: I0223 13:14:38.453250 26474 generic.go:334]
"Generic (PLEG): container finished" podID="e96ce488-0099-43de-9933-425b7c981055" containerID="e4825f9df6fa16b0ad9dbf9273e7e948a88d9ef3bae67e20a5a9a1b6ebc14de3" exitCode=0 Feb 23 13:14:38.454471 master-0 kubenswrapper[26474]: E0223 13:14:38.454417 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:38.461751 master-0 kubenswrapper[26474]: I0223 13:14:38.460247 26474 generic.go:334] "Generic (PLEG): container finished" podID="4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a" containerID="3d99d0c2bd6be47ab909ce0f360a9cd7297541119cb33654550886e7ec757dd2" exitCode=0 Feb 23 13:14:38.467562 master-0 kubenswrapper[26474]: I0223 13:14:38.467479 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-dpxl4_d71885db-c29e-429a-aa1f-1c274796a69f/openshift-controller-manager-operator/1.log" Feb 23 13:14:38.467746 master-0 kubenswrapper[26474]: I0223 13:14:38.467609 26474 generic.go:334] "Generic (PLEG): container finished" podID="d71885db-c29e-429a-aa1f-1c274796a69f" containerID="114dcbc6fdab8f4038f5c9cf10d758f8abe2dd1b2791ef3c7f6e715028e0da39" exitCode=255 Feb 23 13:14:38.471401 master-0 kubenswrapper[26474]: I0223 13:14:38.471354 26474 generic.go:334] "Generic (PLEG): container finished" podID="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" containerID="7cf32cc15b30cd0a472deb261e78baeaf04608bdbd83cf83d235fb4d4ea8600c" exitCode=0 Feb 23 13:14:38.471401 master-0 kubenswrapper[26474]: I0223 13:14:38.471389 26474 generic.go:334] "Generic (PLEG): container finished" podID="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" containerID="b171809f650608e045a0b142cf5e322c52f3d63ebd75fa84b47d19cacf980b23" exitCode=0 Feb 23 13:14:38.471401 master-0 kubenswrapper[26474]: I0223 13:14:38.471399 26474 generic.go:334] "Generic (PLEG): container finished" podID="29126ab2-a689-4b0e-a1f4-4faed19b0fbc" 
containerID="0174cb89e2935df02e69f5ff93da85ad9ed1219108156764e83f071d6c6cbae7" exitCode=0 Feb 23 13:14:38.472883 master-0 kubenswrapper[26474]: I0223 13:14:38.472828 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/2.log" Feb 23 13:14:38.473159 master-0 kubenswrapper[26474]: I0223 13:14:38.473113 26474 generic.go:334] "Generic (PLEG): container finished" podID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerID="0bc833068d6690bca5e413856c348574733d2a2bf3c4281add21911ff4ffaa94" exitCode=255 Feb 23 13:14:38.473159 master-0 kubenswrapper[26474]: I0223 13:14:38.473145 26474 generic.go:334] "Generic (PLEG): container finished" podID="90a694bb-fe3e-4478-bbb4-d2be9cd4c57f" containerID="9e7f44d5060fdf1a6451fc7abe4dd4b1ac2744ce0d994162e1ca6e694e18353f" exitCode=0 Feb 23 13:14:38.475660 master-0 kubenswrapper[26474]: I0223 13:14:38.475594 26474 generic.go:334] "Generic (PLEG): container finished" podID="3a5284f9-cbb7-400b-ab39-bfef60ec198b" containerID="f09a751c8840d92654361a2adf9d69809898e2c9d64d0549e317c7e2743ed948" exitCode=0 Feb 23 13:14:38.475660 master-0 kubenswrapper[26474]: I0223 13:14:38.475647 26474 generic.go:334] "Generic (PLEG): container finished" podID="3a5284f9-cbb7-400b-ab39-bfef60ec198b" containerID="d24c6d197de6f9706c62ad38004bd20b34f4c8cf1d966f2a08d91932b823f26a" exitCode=0 Feb 23 13:14:38.479904 master-0 kubenswrapper[26474]: I0223 13:14:38.479836 26474 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="4239f8be57b6158b6fa0698dec86bee3b9d4f017ada846bb7d788ccf7bd49862" exitCode=0 Feb 23 13:14:38.479904 master-0 kubenswrapper[26474]: I0223 13:14:38.479887 26474 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="d00db22a72ea4aa1ec65791429e5f61e982e0efe4b37e51163034797dd496f23" exitCode=0 Feb 23 
13:14:38.481537 master-0 kubenswrapper[26474]: I0223 13:14:38.479900 26474 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="4b8e974553a5805af4feb6c94d4d5c7568b29cb246442dd9b1691b86b9879742" exitCode=0 Feb 23 13:14:38.483606 master-0 kubenswrapper[26474]: I0223 13:14:38.483542 26474 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="0e189812d4682599d3015af9393b7f83c9c7e758eb4c42ea44314281d98f5ef5" exitCode=0 Feb 23 13:14:38.483606 master-0 kubenswrapper[26474]: I0223 13:14:38.483570 26474 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="de633df615dfa1f75b5da48e16feb9f5558220428b4dd98a89433d879af25256" exitCode=0 Feb 23 13:14:38.483606 master-0 kubenswrapper[26474]: I0223 13:14:38.483581 26474 generic.go:334] "Generic (PLEG): container finished" podID="99f14e64-228f-4b9e-991f-ee398fe7bb8a" containerID="c0cd8e6831fa2b7f0a83e05208d92bf5646225429df385d54f2e069a34fbf956" exitCode=0 Feb 23 13:14:38.488263 master-0 kubenswrapper[26474]: I0223 13:14:38.488200 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_54b76471-bb9d-45a1-b3be-53e4f013e604/installer/0.log" Feb 23 13:14:38.488404 master-0 kubenswrapper[26474]: I0223 13:14:38.488289 26474 generic.go:334] "Generic (PLEG): container finished" podID="54b76471-bb9d-45a1-b3be-53e4f013e604" containerID="886c9563273e1980f3ecc0464372fafeb8a67330b8225928122ba4af2f8bda52" exitCode=1 Feb 23 13:14:38.491810 master-0 kubenswrapper[26474]: I0223 13:14:38.491746 26474 generic.go:334] "Generic (PLEG): container finished" podID="d9b02d3c-f671-4850-8c6e-315044a1376c" containerID="a371a0ec45765fbdd026868b4d9728017f8429bf71f526d8798ec8e60adb809a" exitCode=0 Feb 23 13:14:38.492298 master-0 kubenswrapper[26474]: E0223 13:14:38.492245 26474 kubelet.go:2359] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Feb 23 13:14:38.498012 master-0 kubenswrapper[26474]: I0223 13:14:38.496511 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager/0.log" Feb 23 13:14:38.498012 master-0 kubenswrapper[26474]: I0223 13:14:38.496562 26474 generic.go:334] "Generic (PLEG): container finished" podID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerID="2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b" exitCode=137 Feb 23 13:14:38.523517 master-0 kubenswrapper[26474]: I0223 13:14:38.507530 26474 generic.go:334] "Generic (PLEG): container finished" podID="d03a1e6620a92c780b0a91c72a55bc8b" containerID="d17de98c558298cd0c0ce6c4975f377e4c15754cbdbf335c523539dbef081684" exitCode=0 Feb 23 13:14:38.554394 master-0 kubenswrapper[26474]: I0223 13:14:38.552594 26474 generic.go:334] "Generic (PLEG): container finished" podID="f2c50f9a-8c73-4cb9-9cbf-2565496212a6" containerID="ea1eb72990e94dc776f51ab63d27faea76bd89ac6903bb508a9edd4321ae5a8a" exitCode=0 Feb 23 13:14:38.554607 master-0 kubenswrapper[26474]: E0223 13:14:38.554495 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:38.557374 master-0 kubenswrapper[26474]: I0223 13:14:38.554653 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/3.log" Feb 23 13:14:38.557374 master-0 kubenswrapper[26474]: I0223 13:14:38.554681 26474 generic.go:334] "Generic (PLEG): container finished" podID="5793184d-de96-49ad-a060-0fa0cf278a9c" containerID="52b73a2e4f4ffede944a2f2e078d2bddb0d2196c14d47cf066bca9a42c4e0d7b" exitCode=1 Feb 23 13:14:38.557374 master-0 kubenswrapper[26474]: I0223 13:14:38.556163 26474 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-sj5wd_0d58817c-970f-47b1-a5a5-a491f3e93426/cluster-node-tuning-operator/0.log" Feb 23 13:14:38.557374 master-0 kubenswrapper[26474]: I0223 13:14:38.556185 26474 generic.go:334] "Generic (PLEG): container finished" podID="0d58817c-970f-47b1-a5a5-a491f3e93426" containerID="dc977fa44eb94c7d2786be97eca168973cfb38931e1f243a628741f8ff82c479" exitCode=1 Feb 23 13:14:38.559719 master-0 kubenswrapper[26474]: I0223 13:14:38.559663 26474 generic.go:334] "Generic (PLEG): container finished" podID="4b9d6485-cf67-49c5-99c1-b8582a0bab70" containerID="9e1ed7ebf6d1fa17181b895f05d45d093802e57011b02b870185acec2590ca56" exitCode=0 Feb 23 13:14:38.561621 master-0 kubenswrapper[26474]: I0223 13:14:38.561571 26474 generic.go:334] "Generic (PLEG): container finished" podID="35e97ed9-695d-483e-8878-4f231c79f1d2" containerID="d561b42a38c0b7df53cfb7f78adebe36b09daba8cb18cb5c6854b40cced2e255" exitCode=0 Feb 23 13:14:38.563781 master-0 kubenswrapper[26474]: I0223 13:14:38.563751 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-q7q5x_0d7c1ea0-e3c1-4494-bb27-058200b93ed7/network-operator/0.log" Feb 23 13:14:38.563842 master-0 kubenswrapper[26474]: I0223 13:14:38.563787 26474 generic.go:334] "Generic (PLEG): container finished" podID="0d7c1ea0-e3c1-4494-bb27-058200b93ed7" containerID="eb968c3314cb31b6e0492300e6336271f0112ff545f49715e98a1fe86c9c31d2" exitCode=255 Feb 23 13:14:38.573613 master-0 kubenswrapper[26474]: I0223 13:14:38.573553 26474 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="d2d864a84369989b9d11cae33199d20743ba17dbbbd9594567b6e432600359d1" exitCode=0 Feb 23 13:14:38.574419 master-0 kubenswrapper[26474]: I0223 13:14:38.574338 26474 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" 
containerID="2e255bf3b7625705d5275336ecc4f0432c73e0f5b8fc01e16c0951117a71d88c" exitCode=0 Feb 23 13:14:38.574419 master-0 kubenswrapper[26474]: I0223 13:14:38.574409 26474 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="82ee2b0499ab490936bc4f01ea0b261f0a05bd8f2beb37ede0c37988900d3cbd" exitCode=0 Feb 23 13:14:38.580205 master-0 kubenswrapper[26474]: I0223 13:14:38.580170 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-ql2nl_6ff7868e-f0d3-4c63-901f-fed11d623cf1/manager/0.log" Feb 23 13:14:38.580264 master-0 kubenswrapper[26474]: I0223 13:14:38.580223 26474 generic.go:334] "Generic (PLEG): container finished" podID="6ff7868e-f0d3-4c63-901f-fed11d623cf1" containerID="4cc3ecc5feacb9f931479e4483246f1ec0ef16491cc14ad9cd0c596a2b97f27d" exitCode=1 Feb 23 13:14:38.582443 master-0 kubenswrapper[26474]: I0223 13:14:38.582385 26474 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="d30622693465b0b62d620607efa00658fed43c117d15217ddcd12f4e9ddc2419" exitCode=0 Feb 23 13:14:38.582443 master-0 kubenswrapper[26474]: I0223 13:14:38.582432 26474 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="e83f60b44b83cfd6e3f9aea87eba10757c2f61020bb495edff5a188472446875" exitCode=0 Feb 23 13:14:38.586020 master-0 kubenswrapper[26474]: I0223 13:14:38.585817 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_4bad4fd9-074b-4a4e-8af9-50bdc4be09df/installer/0.log" Feb 23 13:14:38.586090 master-0 kubenswrapper[26474]: I0223 13:14:38.586058 26474 generic.go:334] "Generic (PLEG): container finished" podID="4bad4fd9-074b-4a4e-8af9-50bdc4be09df" containerID="25f5f27094e3f4980d0a60ff68ca18311759b6f56fac8ff763cd8f4150a673af" exitCode=1 Feb 23 13:14:38.656366 master-0 
kubenswrapper[26474]: E0223 13:14:38.655975 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:38.679365 master-0 kubenswrapper[26474]: I0223 13:14:38.673132 26474 generic.go:334] "Generic (PLEG): container finished" podID="8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3" containerID="11f905712f868b255556777ca3c0a3d839f42a18d6d30988e8ef92608383064b" exitCode=0 Feb 23 13:14:38.679365 master-0 kubenswrapper[26474]: I0223 13:14:38.673186 26474 generic.go:334] "Generic (PLEG): container finished" podID="8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3" containerID="2599e93ebbbf10e3a2918075f4c5d9d7aa6ac90db44d1f03155a36a2b83d2e96" exitCode=0 Feb 23 13:14:38.694431 master-0 kubenswrapper[26474]: I0223 13:14:38.692438 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-z7jgz_0d134032-1c35-4b69-9336-bcdc9c1cb87d/machine-approver-controller/0.log" Feb 23 13:14:38.694431 master-0 kubenswrapper[26474]: E0223 13:14:38.692621 26474 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 23 13:14:38.698264 master-0 kubenswrapper[26474]: I0223 13:14:38.698220 26474 generic.go:334] "Generic (PLEG): container finished" podID="0d134032-1c35-4b69-9336-bcdc9c1cb87d" containerID="49844090cc1129b2d843c2317ee9aa9edebd16f2ac5c94c083315778ac1b8f03" exitCode=255 Feb 23 13:14:38.745229 master-0 kubenswrapper[26474]: I0223 13:14:38.741093 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/3.log" Feb 23 13:14:38.745229 master-0 kubenswrapper[26474]: I0223 13:14:38.744905 26474 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736" exitCode=1 Feb 23 13:14:38.745229 
master-0 kubenswrapper[26474]: I0223 13:14:38.744962 26474 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="d48411ed762843923134a92bcee0b4ce878e0a6398d43a3652f882b30f64b563" exitCode=0 Feb 23 13:14:38.752411 master-0 kubenswrapper[26474]: I0223 13:14:38.750460 26474 generic.go:334] "Generic (PLEG): container finished" podID="e0063130-dfb5-4907-a000-f023a77c6441" containerID="b055012e88ad61c2c4ff44365b26ade24e930d1fe63f02496d6b67176e6fe113" exitCode=0 Feb 23 13:14:38.756235 master-0 kubenswrapper[26474]: E0223 13:14:38.756185 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:38.757140 master-0 kubenswrapper[26474]: I0223 13:14:38.757091 26474 generic.go:334] "Generic (PLEG): container finished" podID="922e0be5-23c2-481e-89be-e918dc4ce90c" containerID="d43691285e17b262ba50eeb68e1eefd1b056cc1972a6de9a447539cd5b864f7e" exitCode=0 Feb 23 13:14:38.772241 master-0 kubenswrapper[26474]: I0223 13:14:38.771823 26474 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757" exitCode=0 Feb 23 13:14:38.777464 master-0 kubenswrapper[26474]: I0223 13:14:38.776887 26474 generic.go:334] "Generic (PLEG): container finished" podID="a663ecaf-ced2-4c7d-91c8-44e94851f7d6" containerID="26a1186ff59907fd2f96cc97b54c6ac88a7c2c4d965d9c749c34381a74f361a9" exitCode=0 Feb 23 13:14:38.785312 master-0 kubenswrapper[26474]: I0223 13:14:38.784865 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-nm845_47dedc5d-1288-4020-b481-5dca68a7d437/machine-api-operator/0.log" Feb 23 13:14:38.786797 master-0 kubenswrapper[26474]: I0223 13:14:38.786746 26474 generic.go:334] "Generic (PLEG): container finished" podID="47dedc5d-1288-4020-b481-5dca68a7d437" 
containerID="03c434f6de970d6fadea568234ec0af471fa3dec238b0bd5f6a6179ccb8e7df1" exitCode=255 Feb 23 13:14:38.803210 master-0 kubenswrapper[26474]: I0223 13:14:38.800428 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_283fd2f4-771b-4592-a143-b7e3a5ed6765/installer/0.log" Feb 23 13:14:38.803210 master-0 kubenswrapper[26474]: I0223 13:14:38.800523 26474 generic.go:334] "Generic (PLEG): container finished" podID="283fd2f4-771b-4592-a143-b7e3a5ed6765" containerID="698f0709a0bf6365bf7afb4765b93fe2fefc787772f82b5103295a5f25bae796" exitCode=1 Feb 23 13:14:38.814391 master-0 kubenswrapper[26474]: I0223 13:14:38.814308 26474 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="b4fac1a45391e1b8c8d33575e403cce50d3b72e24f353f507b5f94bf171c63ab" exitCode=0 Feb 23 13:14:38.814682 master-0 kubenswrapper[26474]: I0223 13:14:38.814668 26474 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="a6c6c79f23b0abea958a23a6a452ad603f2442cfcf12d274565330ccbe7468f8" exitCode=0 Feb 23 13:14:38.814773 master-0 kubenswrapper[26474]: I0223 13:14:38.814753 26474 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="6266f5fd682a0e1614165c124ec4bfc2e4e9278c8768f489236b9ce20082b0a0" exitCode=0 Feb 23 13:14:38.859630 master-0 kubenswrapper[26474]: E0223 13:14:38.859561 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:38.862736 master-0 kubenswrapper[26474]: I0223 13:14:38.862703 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-xljfn_bf57b864-25d7-4420-9052-04dd580a9f7d/cluster-autoscaler-operator/0.log" Feb 23 13:14:38.864213 master-0 kubenswrapper[26474]: I0223 13:14:38.864142 26474 generic.go:334] "Generic (PLEG): container 
finished" podID="bf57b864-25d7-4420-9052-04dd580a9f7d" containerID="d0f028f5c9ba3cbdb9aa71d077d68cd25f9f1bd1f015e402871ed79b04b1c8f3" exitCode=255 Feb 23 13:14:38.865985 master-0 kubenswrapper[26474]: I0223 13:14:38.865959 26474 generic.go:334] "Generic (PLEG): container finished" podID="27c1e327-cb40-4b36-b371-20d1271b8d8d" containerID="6b29af04ef4cfc936396d3b8f81eed64ef8bee70e9754f067615d5e03a3e066c" exitCode=0 Feb 23 13:14:38.867467 master-0 kubenswrapper[26474]: I0223 13:14:38.867439 26474 generic.go:334] "Generic (PLEG): container finished" podID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerID="31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9" exitCode=0 Feb 23 13:14:38.868976 master-0 kubenswrapper[26474]: I0223 13:14:38.868952 26474 generic.go:334] "Generic (PLEG): container finished" podID="3a6b0d84-a344-43e4-b9c4-c8e0670528de" containerID="f41bbfdb7f3332d7cf43817f8495af6ada5a69e9698540f12848e6c0a2e50947" exitCode=0 Feb 23 13:14:38.871321 master-0 kubenswrapper[26474]: I0223 13:14:38.871294 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-vfkqg_5ede583b-44b0-42af-92c9-f7b8938f7843/cluster-baremetal-operator/0.log" Feb 23 13:14:38.871420 master-0 kubenswrapper[26474]: I0223 13:14:38.871340 26474 generic.go:334] "Generic (PLEG): container finished" podID="5ede583b-44b0-42af-92c9-f7b8938f7843" containerID="9b6793307745f6a85fc70df6b4de715b7748d6182b66009e926d2209513a5af3" exitCode=1 Feb 23 13:14:38.873118 master-0 kubenswrapper[26474]: I0223 13:14:38.873094 26474 generic.go:334] "Generic (PLEG): container finished" podID="ae8b0e50-59ee-44a9-9a66-8febb833b771" containerID="42b787e83faf9100258d1cfdca0f7aae6b32dce8da26a5afef3b43b0d29e85d2" exitCode=0 Feb 23 13:14:38.876131 master-0 kubenswrapper[26474]: I0223 13:14:38.876107 26474 generic.go:334] "Generic (PLEG): container finished" podID="ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9" 
containerID="bf50e58fb96262a2da0270150de3bc7ed1ff7e9dd4f82079fe11e7f3e00ec9c7" exitCode=0 Feb 23 13:14:38.879908 master-0 kubenswrapper[26474]: I0223 13:14:38.879878 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/2.log" Feb 23 13:14:38.880720 master-0 kubenswrapper[26474]: I0223 13:14:38.880658 26474 generic.go:334] "Generic (PLEG): container finished" podID="878aa813-a8b9-4a6f-8086-778df276d0d7" containerID="3e7dbf4208abe5c9d935ae2680f6b0cac93b049b64aaa57ef376ac31460e3774" exitCode=1 Feb 23 13:14:38.888770 master-0 kubenswrapper[26474]: I0223 13:14:38.888636 26474 generic.go:334] "Generic (PLEG): container finished" podID="7d0a976c-1492-4989-a5ff-e386564dd6ba" containerID="c355e2c1c4f0e97e7c52c65af1c7679e829d5cd786200eccdf8b33d7cd15372a" exitCode=0 Feb 23 13:14:38.893551 master-0 kubenswrapper[26474]: I0223 13:14:38.893506 26474 generic.go:334] "Generic (PLEG): container finished" podID="affc63b7-db45-429d-82ff-e50f6aae51dc" containerID="41202e9f2790a7f6235a0ce9eb87baca7cb432343b22dcbd777e862cc1562fd9" exitCode=0 Feb 23 13:14:38.895931 master-0 kubenswrapper[26474]: I0223 13:14:38.895901 26474 generic.go:334] "Generic (PLEG): container finished" podID="f348bffa-b2f6-4695-88a7-923625e7fb02" containerID="4bf0acfb1627fed2922b1ade4afb1172158564f4516d958d55b369d98f788765" exitCode=0 Feb 23 13:14:38.898797 master-0 kubenswrapper[26474]: I0223 13:14:38.898757 26474 generic.go:334] "Generic (PLEG): container finished" podID="29d3a080-c8a3-4359-9442-972bf4bb9b04" containerID="dd4d5f4a0ab82fe5e433041fcf11c703ce19588ca738c6da0621782807f531c9" exitCode=0 Feb 23 13:14:38.901633 master-0 kubenswrapper[26474]: I0223 13:14:38.901560 26474 generic.go:334] "Generic (PLEG): container finished" podID="24d878bd-05cd-414e-94c1-a3e9ce637331" containerID="31acf0de4b73cbfff55422610e960c624d806171dcec6aaeddd658a636224147" exitCode=0 Feb 23 13:14:38.904410 
master-0 kubenswrapper[26474]: I0223 13:14:38.904373 26474 generic.go:334] "Generic (PLEG): container finished" podID="77ea2b54-bcc2-4c4e-9415-03984721b5b1" containerID="40937a497e7a0d08e36a8702283e0f7ae419987db4a94fd11a6d5428287854b0" exitCode=0 Feb 23 13:14:38.907993 master-0 kubenswrapper[26474]: I0223 13:14:38.907939 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-pqjsm_e5802841-52dc-4d15-a252-0eac70e9fbbc/control-plane-machine-set-operator/0.log" Feb 23 13:14:38.908097 master-0 kubenswrapper[26474]: I0223 13:14:38.908057 26474 generic.go:334] "Generic (PLEG): container finished" podID="e5802841-52dc-4d15-a252-0eac70e9fbbc" containerID="a0b82533ef8a23dd50bebab82c0ca8db95bf68be3db11bf32c9c3702f2b24d95" exitCode=1 Feb 23 13:14:38.912762 master-0 kubenswrapper[26474]: I0223 13:14:38.912735 26474 generic.go:334] "Generic (PLEG): container finished" podID="9e0e3072-a35c-4404-891c-f31fafd0b4b1" containerID="3cd52788b3301033e468b721bd7961d3399c0e73da8a5d018cca17858544dc9b" exitCode=0 Feb 23 13:14:38.912762 master-0 kubenswrapper[26474]: I0223 13:14:38.912756 26474 generic.go:334] "Generic (PLEG): container finished" podID="9e0e3072-a35c-4404-891c-f31fafd0b4b1" containerID="d1c0bbd7755a5caeab64bb63934c87f9dbb896e38d0781069ba996be4781a8c9" exitCode=0 Feb 23 13:14:38.920182 master-0 kubenswrapper[26474]: I0223 13:14:38.920150 26474 generic.go:334] "Generic (PLEG): container finished" podID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" containerID="6cd0275f1f7db307e43c09e5b7b938a05a638192648b348b83255e2e4d8e9eb8" exitCode=0 Feb 23 13:14:38.924093 master-0 kubenswrapper[26474]: I0223 13:14:38.924050 26474 generic.go:334] "Generic (PLEG): container finished" podID="abccfbee-41f4-4557-b953-eb6e719aee31" containerID="d45c58d10778fd4bb86b1fa48d56249170c3cf26b7e64edff21eff2bddff7690" exitCode=0 Feb 23 13:14:38.926145 master-0 kubenswrapper[26474]: I0223 13:14:38.926122 26474 generic.go:334] "Generic 
(PLEG): container finished" podID="92eaa2e2-61cd-4279-a81f-72db51308148" containerID="2e40109d34052395c159362b1fc60377679fbb682b53af5d56f614bb5eac078e" exitCode=0 Feb 23 13:14:38.927961 master-0 kubenswrapper[26474]: I0223 13:14:38.927930 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-cqmh7_fce9f67d-0b27-41e3-ba4c-ed9cca25703e/manager/0.log" Feb 23 13:14:38.928211 master-0 kubenswrapper[26474]: I0223 13:14:38.928181 26474 generic.go:334] "Generic (PLEG): container finished" podID="fce9f67d-0b27-41e3-ba4c-ed9cca25703e" containerID="d249745523695601f887a8698e1ad99347f7c0f390b57c191ff627979ced32b8" exitCode=1 Feb 23 13:14:38.929119 master-0 kubenswrapper[26474]: I0223 13:14:38.929099 26474 generic.go:334] "Generic (PLEG): container finished" podID="6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" containerID="0430bea27e87ced3ed24fee214199f1e9afd86d96157c7f7feb638bd03a355f0" exitCode=0 Feb 23 13:14:38.947478 master-0 kubenswrapper[26474]: I0223 13:14:38.947394 26474 generic.go:334] "Generic (PLEG): container finished" podID="540b41b0-f574-46b9-8b2f-19e90ad5d0ce" containerID="3dce0cc5f97bf43d2b56ee91d574aa374ea8564835a1d8988f603b6c0033063a" exitCode=0 Feb 23 13:14:38.952157 master-0 kubenswrapper[26474]: I0223 13:14:38.952124 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-zr6kv_18386753-ec74-456d-838d-98c07c169b4b/approver/0.log" Feb 23 13:14:38.952471 master-0 kubenswrapper[26474]: I0223 13:14:38.952426 26474 generic.go:334] "Generic (PLEG): container finished" podID="18386753-ec74-456d-838d-98c07c169b4b" containerID="d01166f75613e8876ca557628e42fc7b26709f163770565d233c3c09b10f65ff" exitCode=1 Feb 23 13:14:38.954851 master-0 kubenswrapper[26474]: I0223 13:14:38.954819 26474 generic.go:334] "Generic (PLEG): container finished" podID="57803492-e1dd-4994-8330-1e9b393d54fd" 
containerID="dee213f15416abb9ebf800c43fce607fa7ba3b3cfee07ca0fa563630c117e685" exitCode=0 Feb 23 13:14:38.957617 master-0 kubenswrapper[26474]: I0223 13:14:38.957579 26474 generic.go:334] "Generic (PLEG): container finished" podID="3daf0176-92e7-4642-8643-4afbefb77235" containerID="f205f47da789bb0655eaefd3fc629901d18927b18577bd859aed40fe66e3e22f" exitCode=0 Feb 23 13:14:38.959910 master-0 kubenswrapper[26474]: E0223 13:14:38.959872 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:38.960638 master-0 kubenswrapper[26474]: I0223 13:14:38.960614 26474 generic.go:334] "Generic (PLEG): container finished" podID="d7c80f4d-6b28-44f4-beef-01e705260452" containerID="a12c9e7dba4505df30d1171e23f416e511ae32af8b1117ea50805030fe947775" exitCode=0 Feb 23 13:14:39.060571 master-0 kubenswrapper[26474]: E0223 13:14:39.060467 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.093851 master-0 kubenswrapper[26474]: E0223 13:14:39.093767 26474 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 23 13:14:39.161451 master-0 kubenswrapper[26474]: E0223 13:14:39.161363 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.262742 master-0 kubenswrapper[26474]: E0223 13:14:39.262626 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.362855 master-0 kubenswrapper[26474]: E0223 13:14:39.362772 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.464056 master-0 kubenswrapper[26474]: E0223 13:14:39.463946 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.564920 master-0 
kubenswrapper[26474]: E0223 13:14:39.564723 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.665982 master-0 kubenswrapper[26474]: E0223 13:14:39.665891 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.766967 master-0 kubenswrapper[26474]: E0223 13:14:39.766877 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.868003 master-0 kubenswrapper[26474]: E0223 13:14:39.867843 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:39.894240 master-0 kubenswrapper[26474]: E0223 13:14:39.894131 26474 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 23 13:14:39.968087 master-0 kubenswrapper[26474]: E0223 13:14:39.968007 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.068782 master-0 kubenswrapper[26474]: E0223 13:14:40.068696 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.169274 master-0 kubenswrapper[26474]: E0223 13:14:40.169126 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.270074 master-0 kubenswrapper[26474]: E0223 13:14:40.269992 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.371060 master-0 kubenswrapper[26474]: E0223 13:14:40.370941 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.472016 master-0 kubenswrapper[26474]: E0223 13:14:40.471790 26474 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.572900 master-0 kubenswrapper[26474]: E0223 13:14:40.572783 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.673699 master-0 kubenswrapper[26474]: E0223 13:14:40.673590 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.774418 master-0 kubenswrapper[26474]: E0223 13:14:40.774299 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.875461 master-0 kubenswrapper[26474]: E0223 13:14:40.874676 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.975314 master-0 kubenswrapper[26474]: E0223 13:14:40.975202 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:40.987104 master-0 kubenswrapper[26474]: I0223 13:14:40.987020 26474 generic.go:334] "Generic (PLEG): container finished" podID="73ba4f16-0217-4bf1-8fc2-6b385eda0771" containerID="cf7e22147b726d7bb900d92e5a79955383f2346325db290ec3e45f21c5be3266" exitCode=0 Feb 23 13:14:41.076421 master-0 kubenswrapper[26474]: E0223 13:14:41.076329 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:41.176842 master-0 kubenswrapper[26474]: E0223 13:14:41.176781 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:41.184091 master-0 kubenswrapper[26474]: I0223 13:14:41.184045 26474 manager.go:324] Recovery completed Feb 23 13:14:41.277764 master-0 kubenswrapper[26474]: E0223 13:14:41.277642 26474 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 13:14:41.290945 master-0 
kubenswrapper[26474]: I0223 13:14:41.290884 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.295333 master-0 kubenswrapper[26474]: I0223 13:14:41.295272 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.295333 master-0 kubenswrapper[26474]: I0223 13:14:41.295330 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.295333 master-0 kubenswrapper[26474]: I0223 13:14:41.295374 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.304086 master-0 kubenswrapper[26474]: I0223 13:14:41.303809 26474 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 23 13:14:41.304086 master-0 kubenswrapper[26474]: I0223 13:14:41.303855 26474 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 23 13:14:41.304086 master-0 kubenswrapper[26474]: I0223 13:14:41.303905 26474 state_mem.go:36] "Initialized new in-memory state store" Feb 23 13:14:41.304278 master-0 kubenswrapper[26474]: I0223 13:14:41.304173 26474 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 23 13:14:41.304278 master-0 kubenswrapper[26474]: I0223 13:14:41.304191 26474 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 23 13:14:41.304278 master-0 kubenswrapper[26474]: I0223 13:14:41.304217 26474 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 23 13:14:41.304278 master-0 kubenswrapper[26474]: I0223 13:14:41.304228 26474 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 23 13:14:41.304278 master-0 kubenswrapper[26474]: I0223 13:14:41.304238 26474 policy_none.go:49] "None policy: Start" Feb 23 13:14:41.313475 master-0 kubenswrapper[26474]: I0223 13:14:41.313428 26474 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 
23 13:14:41.313634 master-0 kubenswrapper[26474]: I0223 13:14:41.313512 26474 state_mem.go:35] "Initializing new in-memory state store" Feb 23 13:14:41.313865 master-0 kubenswrapper[26474]: I0223 13:14:41.313840 26474 state_mem.go:75] "Updated machine memory state" Feb 23 13:14:41.313865 master-0 kubenswrapper[26474]: I0223 13:14:41.313855 26474 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 23 13:14:41.332779 master-0 kubenswrapper[26474]: I0223 13:14:41.332654 26474 manager.go:334] "Starting Device Plugin manager" Feb 23 13:14:41.332779 master-0 kubenswrapper[26474]: I0223 13:14:41.332731 26474 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 13:14:41.332779 master-0 kubenswrapper[26474]: I0223 13:14:41.332743 26474 server.go:79] "Starting device plugin registration server" Feb 23 13:14:41.333236 master-0 kubenswrapper[26474]: I0223 13:14:41.333207 26474 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 23 13:14:41.333287 master-0 kubenswrapper[26474]: I0223 13:14:41.333225 26474 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 23 13:14:41.333871 master-0 kubenswrapper[26474]: I0223 13:14:41.333840 26474 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 13:14:41.333931 master-0 kubenswrapper[26474]: I0223 13:14:41.333916 26474 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 13:14:41.333931 master-0 kubenswrapper[26474]: I0223 13:14:41.333924 26474 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 13:14:41.342679 master-0 kubenswrapper[26474]: E0223 13:14:41.342647 26474 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 23 13:14:41.434468 master-0 kubenswrapper[26474]: I0223 
13:14:41.434378 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.437529 master-0 kubenswrapper[26474]: I0223 13:14:41.437477 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.437596 master-0 kubenswrapper[26474]: I0223 13:14:41.437542 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.437596 master-0 kubenswrapper[26474]: I0223 13:14:41.437556 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.437596 master-0 kubenswrapper[26474]: I0223 13:14:41.437586 26474 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 13:14:41.441368 master-0 kubenswrapper[26474]: E0223 13:14:41.441318 26474 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 23 13:14:41.494941 master-0 kubenswrapper[26474]: I0223 13:14:41.494837 26474 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0"] Feb 23 13:14:41.495168 master-0 kubenswrapper[26474]: I0223 13:14:41.494969 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.502183 master-0 kubenswrapper[26474]: I0223 13:14:41.502139 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.502183 
master-0 kubenswrapper[26474]: I0223 13:14:41.502178 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.502183 master-0 kubenswrapper[26474]: I0223 13:14:41.502189 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.502375 master-0 kubenswrapper[26474]: I0223 13:14:41.502294 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.502888 master-0 kubenswrapper[26474]: I0223 13:14:41.502781 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.505194 master-0 kubenswrapper[26474]: I0223 13:14:41.505162 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.505194 master-0 kubenswrapper[26474]: I0223 13:14:41.505188 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.505301 master-0 kubenswrapper[26474]: I0223 13:14:41.505202 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.505301 master-0 kubenswrapper[26474]: I0223 13:14:41.505268 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.505500 master-0 kubenswrapper[26474]: I0223 13:14:41.505455 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.508199 master-0 kubenswrapper[26474]: I0223 13:14:41.508140 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.508199 master-0 kubenswrapper[26474]: I0223 13:14:41.508199 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Feb 23 13:14:41.508300 master-0 kubenswrapper[26474]: I0223 13:14:41.508211 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.508867 master-0 kubenswrapper[26474]: I0223 13:14:41.508840 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.508867 master-0 kubenswrapper[26474]: I0223 13:14:41.508863 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.508867 master-0 kubenswrapper[26474]: I0223 13:14:41.508872 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.509033 master-0 kubenswrapper[26474]: I0223 13:14:41.509011 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.509187 master-0 kubenswrapper[26474]: I0223 13:14:41.509156 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.509275 master-0 kubenswrapper[26474]: I0223 13:14:41.509167 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.509384 master-0 kubenswrapper[26474]: I0223 13:14:41.509369 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.509482 master-0 kubenswrapper[26474]: I0223 13:14:41.509469 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.511862 master-0 kubenswrapper[26474]: I0223 13:14:41.511840 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.511862 master-0 kubenswrapper[26474]: I0223 13:14:41.511860 26474 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.511862 master-0 kubenswrapper[26474]: I0223 13:14:41.511869 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.512016 master-0 kubenswrapper[26474]: I0223 13:14:41.511914 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.512016 master-0 kubenswrapper[26474]: I0223 13:14:41.511949 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.512016 master-0 kubenswrapper[26474]: I0223 13:14:41.511959 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.512016 master-0 kubenswrapper[26474]: I0223 13:14:41.511962 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.512363 master-0 kubenswrapper[26474]: I0223 13:14:41.512328 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.514125 master-0 kubenswrapper[26474]: I0223 13:14:41.514092 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.514187 master-0 kubenswrapper[26474]: I0223 13:14:41.514131 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.514187 master-0 kubenswrapper[26474]: I0223 13:14:41.514146 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.514880 master-0 kubenswrapper[26474]: I0223 13:14:41.514846 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 
13:14:41.515024 master-0 kubenswrapper[26474]: I0223 13:14:41.515000 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.515072 master-0 kubenswrapper[26474]: I0223 13:14:41.515029 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.515072 master-0 kubenswrapper[26474]: I0223 13:14:41.515038 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.515247 master-0 kubenswrapper[26474]: I0223 13:14:41.515216 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:41.517711 master-0 kubenswrapper[26474]: I0223 13:14:41.517676 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.517711 master-0 kubenswrapper[26474]: I0223 13:14:41.517711 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.517845 master-0 kubenswrapper[26474]: I0223 13:14:41.517726 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.517972 master-0 kubenswrapper[26474]: I0223 13:14:41.517926 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf7c1f8a336dc688c688a82a7743a54d6258545018b3b12e6aea371fdcda658c" Feb 23 13:14:41.518020 master-0 kubenswrapper[26474]: I0223 13:14:41.517987 26474 scope.go:117] "RemoveContainer" containerID="530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88" Feb 23 13:14:41.518218 master-0 kubenswrapper[26474]: I0223 13:14:41.518147 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa3a18ca035e1c8a4a8e4c55ea1292328496404fe053666f3bd40c3fd5062f7" Feb 23 13:14:41.518299 
master-0 kubenswrapper[26474]: I0223 13:14:41.518216 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"e375fe5c02f0608ef4aac501c8122f7edac3d21f041acfb53911dc7efc555b71"} Feb 23 13:14:41.518362 master-0 kubenswrapper[26474]: I0223 13:14:41.518307 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"225b72ffe810de606c91db96bda704162eba140695b0d114f42ad9b5f7338027"} Feb 23 13:14:41.518362 master-0 kubenswrapper[26474]: I0223 13:14:41.518325 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"318802d3ffb8951642b7de6e2fcdce57f2f19df5bc9dbb49de74dac1fb692661"} Feb 23 13:14:41.518442 master-0 kubenswrapper[26474]: I0223 13:14:41.518366 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"6d1d2a690e1d1c47fa4cec1c840fe9083bc8bf1097a1a9a0b84ede40886e22da"} Feb 23 13:14:41.518442 master-0 kubenswrapper[26474]: I0223 13:14:41.518386 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerDied","Data":"2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b"} Feb 23 13:14:41.518442 master-0 kubenswrapper[26474]: I0223 13:14:41.518406 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584"} Feb 23 13:14:41.518442 master-0 kubenswrapper[26474]: I0223 13:14:41.518425 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"c95d7390704ea251dc7769d1855655b41d14a57055810b87414232733e52ca76"} Feb 23 13:14:41.518553 master-0 kubenswrapper[26474]: I0223 13:14:41.518445 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"91687f767ec0a818591899bcd9752bd7650e8ae309c3b19533204110ae03e018"} Feb 23 13:14:41.518553 master-0 kubenswrapper[26474]: I0223 13:14:41.518463 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"c7094c932ff3ee165f80299c697532f08bd592736188ecad774f00acf21ea126"} Feb 23 13:14:41.518553 master-0 kubenswrapper[26474]: I0223 13:14:41.518483 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerDied","Data":"d17de98c558298cd0c0ce6c4975f377e4c15754cbdbf335c523539dbef081684"} Feb 23 13:14:41.518553 master-0 kubenswrapper[26474]: I0223 13:14:41.518501 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"968a9822c87f73e9559c28309a177baff6729af2cf700098ba1888ec0387b7bc"} Feb 23 13:14:41.518553 master-0 kubenswrapper[26474]: I0223 13:14:41.518522 26474 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518561 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"4f359c660906c23bf880e9290f1f8922442dc88b951f15e1fd0f3a2beaf307ff"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518577 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"42c072ce8addb2af8e54c2540d3d7bf94b7e1c11c8b5b3516735dd9bc3b16010"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518590 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"50a58d0ce7c894ef63637afa134bce96bf1f006a6bab4ac3f1ecd56d9d50fb4c"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518602 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"890a57877b523717f0581cb46be8e5e3ffe6e394eb800ce34a2eccd8a9ed9c26"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518614 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"d240edda552a194cc839e519a3ef6f597dac970b89cedcf002a5ca19e1dccea4"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518626 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"d2d864a84369989b9d11cae33199d20743ba17dbbbd9594567b6e432600359d1"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: 
I0223 13:14:41.518641 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"2e255bf3b7625705d5275336ecc4f0432c73e0f5b8fc01e16c0951117a71d88c"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518654 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"82ee2b0499ab490936bc4f01ea0b261f0a05bd8f2beb37ede0c37988900d3cbd"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518666 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"047da142cf199754ddf37417b491aa94e635780094c0890acb8879faf9433391"} Feb 23 13:14:41.518693 master-0 kubenswrapper[26474]: I0223 13:14:41.518698 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b03398c7c5342531ea65126f53e9604327adfe194442ab3309f39be1e15bbf7" Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518714 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a7eb5dd9ce527b37e8cbbeeed3ebf6bd149269a0623c131d3e7f8e71c4f12f" Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518464 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518749 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"1145acd6528f641fe4dba004ec108b22fd6a9f58b87118602acd22f6be1e6680"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518755 26474 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518763 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"1ea8c4763be77e6f7280bc0b0065d2af3ca149a32b691e02da59a35b9a221736"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518776 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"d48411ed762843923134a92bcee0b4ce878e0a6398d43a3652f882b30f64b563"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518789 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"dca0d5d23c79d3b7cc72be9bce1a00fd8c8bc108507e4825b3a2cc14febf8271"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518803 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451186e3bd087f4e2a317072e4c098e400af909a2727bddeb8b4a06743ad2510" Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518825 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518766 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518838 26474 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518852 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518863 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518875 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518888 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"6244c01d47d261c3397fd1d23da4ef09fefd7e2aec48680428b0aeff62e0f579"} Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518919 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbee83a28e4e85b2d4891dc24855eb2cc6165c6448a3273aa5f8a3ec8e2cf444" Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518936 26474 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="67b4fe131873538fe61511e37dc117788a104dcaa0de563054d4cdc1ee0dfb72" Feb 23 13:14:41.518968 master-0 kubenswrapper[26474]: I0223 13:14:41.518971 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9da06ee23b6cb8b9623f7a51ebd1e82f9f88d5c18ad94ee1191bb007985ffa" Feb 23 13:14:41.519526 master-0 kubenswrapper[26474]: I0223 13:14:41.519060 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2f8d85ad085a2df67368f809c78552ac79db7bb7c6a318c3cb36dbd40dda7af" Feb 23 13:14:41.519526 master-0 kubenswrapper[26474]: I0223 13:14:41.519118 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="109b623b5a1ea0fcc0a5a5fd7d747c9ee8a3d9d901c40db77e82589e69041e94" Feb 23 13:14:41.519526 master-0 kubenswrapper[26474]: I0223 13:14:41.519131 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce34e8f56df01165d847f6464e77360e3f0978547ad68a6025ff1d62dabfaac" Feb 23 13:14:41.519526 master-0 kubenswrapper[26474]: I0223 13:14:41.519159 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a59a96ae4654f36cd7044dc91477d79119c89e04c37a4bf1eb93ffbac15b813" Feb 23 13:14:41.519526 master-0 kubenswrapper[26474]: I0223 13:14:41.519181 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71"} Feb 23 13:14:41.519526 master-0 kubenswrapper[26474]: I0223 13:14:41.519195 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"052c44c6601541168da6658fd684e918c275c792bb8a5e698af0c5869ee863d3"} Feb 23 13:14:41.523067 master-0 kubenswrapper[26474]: 
I0223 13:14:41.523006 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 13:14:41.523067 master-0 kubenswrapper[26474]: I0223 13:14:41.523035 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 13:14:41.523067 master-0 kubenswrapper[26474]: I0223 13:14:41.523044 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 13:14:41.544876 master-0 kubenswrapper[26474]: I0223 13:14:41.544783 26474 scope.go:117] "RemoveContainer" containerID="530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88"
Feb 23 13:14:41.545814 master-0 kubenswrapper[26474]: E0223 13:14:41.545782 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88\": container with ID starting with 530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88 not found: ID does not exist" containerID="530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88"
Feb 23 13:14:41.545886 master-0 kubenswrapper[26474]: I0223 13:14:41.545816 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88"} err="failed to get container status \"530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88\": rpc error: code = NotFound desc = could not find container \"530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88\": container with ID starting with 530e18c176bc3b3d1b200f2a5bad310560433f2ac5a05e0c7042ab16ba25bd88 not found: ID does not exist"
Feb 23 13:14:41.643610 master-0 kubenswrapper[26474]: I0223 13:14:41.642369 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 13:14:41.649121 master-0 kubenswrapper[26474]: I0223 13:14:41.649045 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 13:14:41.649179 master-0 kubenswrapper[26474]: I0223 13:14:41.649137 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 13:14:41.649179 master-0 kubenswrapper[26474]: I0223 13:14:41.649159 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 13:14:41.649246 master-0 kubenswrapper[26474]: I0223 13:14:41.649200 26474 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 13:14:41.653012 master-0 kubenswrapper[26474]: E0223 13:14:41.652956 26474 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 23 13:14:42.053668 master-0 kubenswrapper[26474]: I0223 13:14:42.053558 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 13:14:42.057354 master-0 kubenswrapper[26474]: I0223 13:14:42.057293 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 13:14:42.057432 master-0 kubenswrapper[26474]: I0223 13:14:42.057401 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 13:14:42.057432 master-0 kubenswrapper[26474]: I0223 13:14:42.057413 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 13:14:42.057495 master-0 kubenswrapper[26474]: I0223 13:14:42.057438 26474 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 13:14:42.062219 master-0 kubenswrapper[26474]: E0223 13:14:42.062143 26474 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 23 13:14:42.862664 master-0 kubenswrapper[26474]: I0223 13:14:42.862559 26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 13:14:42.866797 master-0 kubenswrapper[26474]: I0223 13:14:42.866726 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 13:14:42.866797 master-0 kubenswrapper[26474]: I0223 13:14:42.866788 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 13:14:42.866797 master-0 kubenswrapper[26474]: I0223 13:14:42.866807 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 13:14:42.867046 master-0 kubenswrapper[26474]: I0223 13:14:42.866839 26474 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 13:14:42.872218 master-0 kubenswrapper[26474]: E0223 13:14:42.872152 26474 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 23 13:14:43.690200 master-0 kubenswrapper[26474]: I0223 13:14:43.690128 26474 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 13:14:43.691149 master-0 kubenswrapper[26474]: I0223 13:14:43.691091 26474 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 23 13:14:43.694994 master-0 kubenswrapper[26474]: I0223 13:14:43.693257 26474 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 13:14:43.694994 master-0 kubenswrapper[26474]: I0223 13:14:43.693645 26474 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 23 13:14:43.696231 master-0 kubenswrapper[26474]: I0223 13:14:43.696076 26474 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 23 13:14:43.794272 master-0 kubenswrapper[26474]: I0223 13:14:43.794207 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:14:43.794538 master-0 kubenswrapper[26474]: I0223 13:14:43.794329 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.794538 master-0 kubenswrapper[26474]: I0223 13:14:43.794419 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.794538 master-0 kubenswrapper[26474]: I0223 13:14:43.794452 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.794538 master-0 kubenswrapper[26474]: I0223 13:14:43.794484 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.794779 master-0 kubenswrapper[26474]: I0223 13:14:43.794575 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.794779 master-0 kubenswrapper[26474]: I0223 13:14:43.794626 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.794779 master-0 kubenswrapper[26474]: I0223 13:14:43.794651 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:43.794779 master-0 kubenswrapper[26474]: I0223 13:14:43.794669 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.794779 master-0 kubenswrapper[26474]: I0223 13:14:43.794684 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.794779 master-0 kubenswrapper[26474]: I0223 13:14:43.794700 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.794779 master-0 kubenswrapper[26474]: I0223 13:14:43.794760 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.795037 master-0 kubenswrapper[26474]: I0223 13:14:43.794806 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:43.795037 master-0 kubenswrapper[26474]: I0223 13:14:43.794833 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:43.795037 master-0 kubenswrapper[26474]: I0223 13:14:43.794867 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:14:43.795037 master-0 kubenswrapper[26474]: I0223 13:14:43.794886 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.795037 master-0 kubenswrapper[26474]: I0223 13:14:43.794900 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.795037 master-0 kubenswrapper[26474]: I0223 13:14:43.794920 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.795037 master-0 kubenswrapper[26474]: I0223 13:14:43.794935 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.795037 master-0 kubenswrapper[26474]: I0223 13:14:43.794952 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:43.895406 master-0 kubenswrapper[26474]: I0223 13:14:43.895161 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.895406 master-0 kubenswrapper[26474]: I0223 13:14:43.895204 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.895406 master-0 kubenswrapper[26474]: I0223 13:14:43.895249 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.895406 master-0 kubenswrapper[26474]: I0223 13:14:43.895285 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895450 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895530 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895706 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895774 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895807 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895833 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895851 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895867 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895883 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895899 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895914 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895928 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.895947 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896143 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896160 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896175 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896190 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896206 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896220 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896239 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896265 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896286 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896309 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896330 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896377 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896403 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896424 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896443 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.896432 master-0 kubenswrapper[26474]: I0223 13:14:43.896471 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.898108 master-0 kubenswrapper[26474]: I0223 13:14:43.896503 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.898108 master-0 kubenswrapper[26474]: I0223 13:14:43.896527 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.898108 master-0 kubenswrapper[26474]: I0223 13:14:43.896550 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:43.898108 master-0 kubenswrapper[26474]: I0223 13:14:43.896569 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:43.898108 master-0 kubenswrapper[26474]: I0223 13:14:43.896587 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:14:43.898108 master-0 kubenswrapper[26474]: I0223 13:14:43.896605 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.898108 master-0 kubenswrapper[26474]: I0223 13:14:43.896625 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.920036 master-0 kubenswrapper[26474]: I0223 13:14:43.919971 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:43.923619 master-0 kubenswrapper[26474]: I0223 13:14:43.923579 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.923695 master-0 kubenswrapper[26474]: I0223 13:14:43.923653 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:43.935377 master-0 kubenswrapper[26474]: I0223 13:14:43.935296 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:44.033148 master-0 kubenswrapper[26474]: E0223 13:14:44.033075 26474 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:44.033721 master-0 kubenswrapper[26474]: E0223 13:14:44.033688 26474 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:14:44.033959 master-0 kubenswrapper[26474]: E0223 13:14:44.033928 26474 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:44.034368 master-0 kubenswrapper[26474]: E0223 13:14:44.034319 26474 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:14:44.038681 master-0 kubenswrapper[26474]: E0223 13:14:44.038020 26474 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:44.209871 master-0 kubenswrapper[26474]: I0223 13:14:44.209811 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:44.241367 master-0 kubenswrapper[26474]: I0223 13:14:44.236897 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:44.318667 master-0 kubenswrapper[26474]: I0223 13:14:44.318567 26474 apiserver.go:52] "Watching apiserver"
Feb 23 13:14:44.341728 master-0 kubenswrapper[26474]: I0223 13:14:44.341605 26474 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 13:14:44.345295 master-0 kubenswrapper[26474]: I0223 13:14:44.345199 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0","openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr","openshift-monitoring/metrics-server-69f7f878d4-746vx","openshift-multus/multus-additional-cni-plugins-srlm4","openshift-network-operator/network-operator-7d7db75979-q7q5x","openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz","openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p","openshift-insights/insights-operator-59b498fcfb-sswng","openshift-monitoring/prometheus-operator-754bc4d665-2ksrm","openshift-ovn-kubernetes/ovnkube-node-qz8dt","openshift-apiserver/apiserver-9f44475c9-drjp5","openshift-cluster-node-tuning-operator/tuned-mjpd9","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h","openshift-controller-manager/controller-manager-69f44bb786-4zj6n","openshift-dns-operator/dns-operator-8c7d49845-g8fdn","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr","openshift-kube-controller-manager/installer-4-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng","openshift-kube-controller-manager/installer-3-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-marketplace/marketplace-operator-6f5488b997-588zk","openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2","openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl","openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq","openshift-ingress/router-default-7b65dc9fcb-kcfgf","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-marketplace/certified-operators-vnmk2","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g","openshift-kube-apiserver/installer-3-master-0","openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg","openshift-network-node-identity/network-node-identity-zr6kv","openshift-ingress-operator/ingress-operator-6569778c84-k9h69","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl","openshift-service-ca/service-ca-576b4d78bd-9pltw","openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm","openshift-monitoring/kube-state-metrics-59584d565f-r66qv","openshift-monitoring/node-exporter-tv6s2","openshift-multus/network-metrics-daemon-bbrcr","openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj","openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn","openshift-marketplace/community-operators-w7wq9","openshift-network-operator/iptables-alerter-qg27h","openshift-kube-apiserver/installer-1-master-0","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz","assisted-installer/assisted-installer-controller-nktl9","openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4","openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl","openshift-machine-config-operator/machine-config-daemon-q8bjq","openshift-marketplace/redhat-marketplace-vwhpv","openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t","openshift-cluster-version/cluster-version-operator-57476485-8jbxf","openshift-etcd/installer-1-master-0","openshift-kube-controller-manager/installer-2-master-0","openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g","openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv","openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4","openshift-dns/dns-default-ljphn","openshift-dns/node-resolver-rxc8b","openshift-marketplace/redhat-operators-zrtmg","openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z","openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh","openshift-kube-scheduler/installer-5-master-0","openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th","openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8","openshift-network-diagnostics/network-check-target-rnz52","openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2","openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf","openshift-etcd/etcd-master-0","openshift-machine-api/machine-api-operator-5c7cf458b4-nm845","openshift-machine-config-operator/machine-config-server-97rhg","openshift-multus/multus-6lk7x","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"]
Feb 23 13:14:44.346205 master-0 kubenswrapper[26474]: I0223 13:14:44.346165 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-nktl9"
Feb 23 13:14:44.356434 master-0 kubenswrapper[26474]: I0223 13:14:44.354134 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 13:14:44.356434 master-0 kubenswrapper[26474]: I0223 13:14:44.354561 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 23 13:14:44.356434 master-0 kubenswrapper[26474]: I0223 13:14:44.355121 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.356967 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.357789 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.357822 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.357939 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.358112 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.358114 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.358222 26474 reflector.go:368] Caches populated
for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.358488 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.358751 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 13:14:44.359824 master-0 kubenswrapper[26474]: I0223 13:14:44.359009 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:14:44.370785 master-0 kubenswrapper[26474]: I0223 13:14:44.370728 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 13:14:44.371138 master-0 kubenswrapper[26474]: I0223 13:14:44.371122 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 13:14:44.371310 master-0 kubenswrapper[26474]: I0223 13:14:44.371269 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 13:14:44.371474 master-0 kubenswrapper[26474]: I0223 13:14:44.371457 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.371763 master-0 kubenswrapper[26474]: I0223 13:14:44.371688 26474 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="3f1d2c44-e357-47c1-928f-bf14fa7a53e2" Feb 23 13:14:44.372641 master-0 kubenswrapper[26474]: I0223 13:14:44.372620 26474 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.372832 master-0 kubenswrapper[26474]: I0223 13:14:44.372776 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 13:14:44.378685 master-0 kubenswrapper[26474]: I0223 13:14:44.377661 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 13:14:44.382245 master-0 kubenswrapper[26474]: I0223 13:14:44.382199 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 13:14:44.382304 master-0 kubenswrapper[26474]: I0223 13:14:44.382204 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.382511 master-0 kubenswrapper[26474]: I0223 13:14:44.382459 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 13:14:44.382690 master-0 kubenswrapper[26474]: I0223 13:14:44.382669 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 13:14:44.382774 master-0 kubenswrapper[26474]: I0223 13:14:44.382747 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 23 13:14:44.382774 master-0 kubenswrapper[26474]: I0223 13:14:44.382763 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 23 13:14:44.382849 master-0 kubenswrapper[26474]: I0223 13:14:44.382783 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 13:14:44.382881 master-0 kubenswrapper[26474]: I0223 
13:14:44.382857 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 13:14:44.382913 master-0 kubenswrapper[26474]: I0223 13:14:44.382887 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 13:14:44.382989 master-0 kubenswrapper[26474]: I0223 13:14:44.382966 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 13:14:44.383170 master-0 kubenswrapper[26474]: I0223 13:14:44.383133 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 13:14:44.383408 master-0 kubenswrapper[26474]: I0223 13:14:44.383299 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 13:14:44.383628 master-0 kubenswrapper[26474]: I0223 13:14:44.383386 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.383628 master-0 kubenswrapper[26474]: I0223 13:14:44.383581 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 13:14:44.383837 master-0 kubenswrapper[26474]: I0223 13:14:44.383801 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 13:14:44.383912 master-0 kubenswrapper[26474]: I0223 13:14:44.383892 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 13:14:44.383954 master-0 kubenswrapper[26474]: I0223 13:14:44.383918 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 13:14:44.383954 master-0 
kubenswrapper[26474]: I0223 13:14:44.383921 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.384610 master-0 kubenswrapper[26474]: I0223 13:14:44.384559 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 13:14:44.385398 master-0 kubenswrapper[26474]: I0223 13:14:44.385375 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 13:14:44.385732 master-0 kubenswrapper[26474]: I0223 13:14:44.385661 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 13:14:44.385732 master-0 kubenswrapper[26474]: I0223 13:14:44.385700 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 13:14:44.387023 master-0 kubenswrapper[26474]: I0223 13:14:44.386708 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 13:14:44.387023 master-0 kubenswrapper[26474]: I0223 13:14:44.386863 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 13:14:44.396358 master-0 kubenswrapper[26474]: I0223 13:14:44.396234 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 23 13:14:44.397411 master-0 kubenswrapper[26474]: I0223 13:14:44.397358 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 13:14:44.397588 master-0 kubenswrapper[26474]: I0223 13:14:44.397537 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 13:14:44.397872 master-0 kubenswrapper[26474]: I0223 13:14:44.397836 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 13:14:44.398047 master-0 kubenswrapper[26474]: I0223 13:14:44.398013 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 13:14:44.399030 master-0 kubenswrapper[26474]: I0223 13:14:44.398404 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:14:44.399030 master-0 kubenswrapper[26474]: I0223 13:14:44.398514 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:14:44.399030 master-0 kubenswrapper[26474]: I0223 13:14:44.398614 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.399030 master-0 kubenswrapper[26474]: I0223 13:14:44.398711 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 13:14:44.399030 master-0 kubenswrapper[26474]: I0223 13:14:44.398630 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 13:14:44.399030 master-0 kubenswrapper[26474]: I0223 13:14:44.399029 26474 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 23 13:14:44.399689 master-0 kubenswrapper[26474]: I0223 13:14:44.399617 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:14:44.399759 master-0 kubenswrapper[26474]: I0223 13:14:44.399709 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9jc\" (UniqueName: \"kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:14:44.399800 master-0 kubenswrapper[26474]: I0223 13:14:44.399774 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.399856 master-0 kubenswrapper[26474]: I0223 13:14:44.399827 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfqmb\" (UniqueName: \"kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.399928 master-0 kubenswrapper[26474]: I0223 13:14:44.399884 
26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:14:44.399975 master-0 kubenswrapper[26474]: I0223 13:14:44.399942 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:14:44.400032 master-0 kubenswrapper[26474]: I0223 13:14:44.399988 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:14:44.400122 master-0 kubenswrapper[26474]: I0223 13:14:44.400081 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.400216 master-0 kubenswrapper[26474]: I0223 13:14:44.400158 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6mk9\" (UniqueName: \"kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9\") pod 
\"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:14:44.400282 master-0 kubenswrapper[26474]: I0223 13:14:44.400222 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.400282 master-0 kubenswrapper[26474]: I0223 13:14:44.400267 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.400381 master-0 kubenswrapper[26474]: I0223 13:14:44.400327 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:14:44.400428 master-0 kubenswrapper[26474]: I0223 13:14:44.400407 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:14:44.400483 master-0 kubenswrapper[26474]: I0223 13:14:44.400462 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9qsvg\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:14:44.400544 master-0 kubenswrapper[26474]: I0223 13:14:44.400510 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.401107 master-0 kubenswrapper[26474]: I0223 13:14:44.401058 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-serving-cert\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.402519 master-0 kubenswrapper[26474]: I0223 13:14:44.402475 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71885db-c29e-429a-aa1f-1c274796a69f-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:14:44.406502 master-0 kubenswrapper[26474]: I0223 13:14:44.406040 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-metrics-tls\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " 
pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:14:44.406502 master-0 kubenswrapper[26474]: I0223 13:14:44.406315 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-config\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.406798 master-0 kubenswrapper[26474]: I0223 13:14:44.406608 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.406922 master-0 kubenswrapper[26474]: I0223 13:14:44.406875 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:14:44.407531 master-0 kubenswrapper[26474]: I0223 13:14:44.407478 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 13:14:44.407871 master-0 kubenswrapper[26474]: I0223 13:14:44.407831 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 23 13:14:44.408261 master-0 kubenswrapper[26474]: I0223 13:14:44.408217 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 23 13:14:44.408378 master-0 kubenswrapper[26474]: I0223 13:14:44.408366 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:14:44.408507 master-0 kubenswrapper[26474]: I0223 13:14:44.408462 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 13:14:44.408889 master-0 kubenswrapper[26474]: I0223 13:14:44.408821 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-ca\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.408987 master-0 kubenswrapper[26474]: I0223 13:14:44.408917 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71885db-c29e-429a-aa1f-1c274796a69f-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:14:44.408987 master-0 kubenswrapper[26474]: I0223 13:14:44.408982 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 13:14:44.409104 master-0 kubenswrapper[26474]: I0223 13:14:44.409035 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 23 13:14:44.409171 master-0 kubenswrapper[26474]: I0223 13:14:44.409145 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 13:14:44.409818 master-0 kubenswrapper[26474]: I0223 13:14:44.409540 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 23 13:14:44.409818 master-0 
kubenswrapper[26474]: I0223 13:14:44.409795 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9b02d3c-f671-4850-8c6e-315044a1376c-etcd-client\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4" Feb 23 13:14:44.410108 master-0 kubenswrapper[26474]: I0223 13:14:44.410056 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 13:14:44.410326 master-0 kubenswrapper[26474]: I0223 13:14:44.410279 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:14:44.410421 master-0 kubenswrapper[26474]: I0223 13:14:44.410363 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:14:44.410498 master-0 kubenswrapper[26474]: I0223 13:14:44.410451 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:14:44.410832 master-0 kubenswrapper[26474]: I0223 13:14:44.410792 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.410896 master-0 kubenswrapper[26474]: I0223 13:14:44.410873 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 23 13:14:44.411187 master-0 kubenswrapper[26474]: I0223 13:14:44.411147 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 13:14:44.411466 master-0 kubenswrapper[26474]: I0223 13:14:44.411429 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.411551 master-0 kubenswrapper[26474]: I0223 13:14:44.411503 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 13:14:44.411551 master-0 kubenswrapper[26474]: I0223 13:14:44.411542 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:14:44.411749 master-0 kubenswrapper[26474]: I0223 13:14:44.411709 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 13:14:44.411749 master-0 kubenswrapper[26474]: I0223 13:14:44.411747 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:14:44.411881 master-0 kubenswrapper[26474]: I0223 13:14:44.411858 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 13:14:44.411937 master-0 kubenswrapper[26474]: I0223 13:14:44.411879 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 13:14:44.415663 master-0 kubenswrapper[26474]: I0223 13:14:44.415591 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 13:14:44.422358 master-0 kubenswrapper[26474]: I0223 13:14:44.421942 26474 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 23 13:14:44.424253 master-0 kubenswrapper[26474]: I0223 13:14:44.424223 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 13:14:44.426232 master-0 kubenswrapper[26474]: I0223 13:14:44.426165 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 13:14:44.429890 master-0 kubenswrapper[26474]: I0223 13:14:44.427466 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 13:14:44.429890 master-0 kubenswrapper[26474]: I0223 13:14:44.427886 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.429890 master-0 kubenswrapper[26474]: I0223 13:14:44.427994 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 13:14:44.429890 master-0 kubenswrapper[26474]: I0223 13:14:44.428820 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 13:14:44.429890 master-0 kubenswrapper[26474]: I0223 13:14:44.428856 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 13:14:44.429890 master-0 kubenswrapper[26474]: I0223 13:14:44.429167 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 13:14:44.429890 master-0 kubenswrapper[26474]: I0223 13:14:44.429627 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 23 13:14:44.429890 master-0 kubenswrapper[26474]: I0223 
13:14:44.429703 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 13:14:44.430661 master-0 kubenswrapper[26474]: I0223 13:14:44.430620 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/92eaa2e2-61cd-4279-a81f-72db51308148-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:14:44.431715 master-0 kubenswrapper[26474]: I0223 13:14:44.430882 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 23 13:14:44.431715 master-0 kubenswrapper[26474]: I0223 13:14:44.430952 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 13:14:44.431715 master-0 kubenswrapper[26474]: I0223 13:14:44.431116 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 13:14:44.432084 master-0 kubenswrapper[26474]: I0223 13:14:44.431861 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 23 13:14:44.432084 master-0 kubenswrapper[26474]: I0223 13:14:44.431911 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 23 13:14:44.432084 master-0 kubenswrapper[26474]: I0223 13:14:44.431868 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 13:14:44.438500 master-0 kubenswrapper[26474]: I0223 13:14:44.437689 26474 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 23 13:14:44.439295 master-0 kubenswrapper[26474]: I0223 13:14:44.439134 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 13:14:44.440475 master-0 kubenswrapper[26474]: I0223 13:14:44.440104 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687e92a6cecf1e2beeef16a0b322ad08" path="/var/lib/kubelet/pods/687e92a6cecf1e2beeef16a0b322ad08/volumes" Feb 23 13:14:44.441066 master-0 kubenswrapper[26474]: I0223 13:14:44.441014 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 23 13:14:44.442780 master-0 kubenswrapper[26474]: I0223 13:14:44.442737 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 13:14:44.445922 master-0 kubenswrapper[26474]: I0223 13:14:44.445882 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 13:14:44.447396 master-0 kubenswrapper[26474]: I0223 13:14:44.446799 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:14:44.448156 master-0 kubenswrapper[26474]: I0223 13:14:44.448107 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 13:14:44.448608 master-0 kubenswrapper[26474]: I0223 13:14:44.448543 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 13:14:44.449629 master-0 kubenswrapper[26474]: I0223 13:14:44.449598 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92eaa2e2-61cd-4279-a81f-72db51308148-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: 
\"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl" Feb 23 13:14:44.450863 master-0 kubenswrapper[26474]: I0223 13:14:44.450811 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 13:14:44.450987 master-0 kubenswrapper[26474]: I0223 13:14:44.450875 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 13:14:44.451124 master-0 kubenswrapper[26474]: I0223 13:14:44.451088 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 13:14:44.451311 master-0 kubenswrapper[26474]: I0223 13:14:44.451261 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 13:14:44.451390 master-0 kubenswrapper[26474]: I0223 13:14:44.451335 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 23 13:14:44.451510 master-0 kubenswrapper[26474]: I0223 13:14:44.451475 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 23 13:14:44.453323 master-0 kubenswrapper[26474]: I0223 13:14:44.453284 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 13:14:44.456666 master-0 kubenswrapper[26474]: I0223 13:14:44.456626 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 23 13:14:44.460965 master-0 kubenswrapper[26474]: I0223 13:14:44.460921 26474 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 23 13:14:44.480497 master-0 kubenswrapper[26474]: I0223 13:14:44.472989 
26474 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:14:44.480497 master-0 kubenswrapper[26474]: I0223 13:14:44.473430 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 23 13:14:44.480497 master-0 kubenswrapper[26474]: I0223 13:14:44.476510 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:14:44.480497 master-0 kubenswrapper[26474]: I0223 13:14:44.476543 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:14:44.480497 master-0 kubenswrapper[26474]: I0223 13:14:44.476559 26474 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:14:44.480497 master-0 kubenswrapper[26474]: I0223 13:14:44.476874 26474 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 13:14:44.499956 master-0 kubenswrapper[26474]: I0223 13:14:44.499904 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 23 13:14:44.501262 master-0 kubenswrapper[26474]: I0223 13:14:44.501195 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b825\" (UniqueName: \"kubernetes.io/projected/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-kube-api-access-4b825\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw" Feb 23 13:14:44.501332 master-0 kubenswrapper[26474]: I0223 13:14:44.501268 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: 
\"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.501332 master-0 kubenswrapper[26474]: I0223 13:14:44.501314 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501453 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501509 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501544 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06ccd378-23ee-49b7-a435-4b01de772155-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501570 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8wvx\" 
(UniqueName: \"kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501666 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501696 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-textfile\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501722 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501748 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 
13:14:44.501769 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501793 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501814 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znjcw\" (UniqueName: \"kubernetes.io/projected/898e6c96-73d5-4dc5-a383-986599a5bcd9-kube-api-access-znjcw\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501833 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.501915 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:14:44.502081 master-0 kubenswrapper[26474]: I0223 13:14:44.502098 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e941c759-ab95-4b30-a571-6c132ab0e639-metrics-certs\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502103 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502156 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502183 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wplcg\" (UniqueName: \"kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502200 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502209 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbhv\" (UniqueName: \"kubernetes.io/projected/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-kube-api-access-mhbhv\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502244 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502288 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502312 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502360 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzdfs\" (UniqueName: \"kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502387 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502411 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-ld22t\" (UID: \"54001c8e-cb57-47dc-8594-9daed4190bda\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502440 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: 
\"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502464 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502488 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502516 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7b4r\" (UniqueName: \"kubernetes.io/projected/5ede583b-44b0-42af-92c9-f7b8938f7843-kube-api-access-p7b4r\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502543 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502568 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502607 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502633 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d58817c-970f-47b1-a5a5-a491f3e93426-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502690 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57803492-e1dd-4994-8330-1e9b393d54fd-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502768 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysconfig\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502795 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-var-lib-kubelet\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502820 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:14:44.502826 master-0 kubenswrapper[26474]: I0223 13:14:44.502825 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.502854 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-env-overrides\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.502918 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.502896 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.502976 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503001 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503010 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-textfile\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 
13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503025 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503046 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-image-import-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503067 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503091 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71cb2f21-6d27-411f-9c2f-d5fa286895a7-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503091 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: 
\"kubernetes.io/secret/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503135 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-systemd\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503166 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cksnd\" (UniqueName: \"kubernetes.io/projected/77ea2b54-bcc2-4c4e-9415-03984721b5b1-kube-api-access-cksnd\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503191 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/57803492-e1dd-4994-8330-1e9b393d54fd-rootfs\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503208 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-multus-daemon-config\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503213 26474 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-run\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503251 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503259 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503273 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503331 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4mkf\" (UniqueName: \"kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf\") pod \"ovnkube-node-qz8dt\" (UID: 
\"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503382 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b0d84-a344-43e4-b9c4-c8e0670528de-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503400 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7sfw\" (UniqueName: \"kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503428 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6ff7868e-f0d3-4c63-901f-fed11d623cf1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503471 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71cb2f21-6d27-411f-9c2f-d5fa286895a7-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: 
I0223 13:14:44.503467 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5b459832-b875-49a6-a7c3-253fa6c8e45a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503531 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503555 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503574 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-sys\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503579 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6ff7868e-f0d3-4c63-901f-fed11d623cf1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: 
\"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503591 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503626 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503656 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6q5\" (UniqueName: \"kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503681 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llgnr\" (UniqueName: \"kubernetes.io/projected/bdad149d-da6f-49ac-85e5-deb01f161166-kube-api-access-llgnr\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503703 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-lib-modules\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503720 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-config\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503724 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-encryption-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503749 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-audit-dir\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503769 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503796 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503822 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06ccd378-23ee-49b7-a435-4b01de772155-proxy-tls\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503848 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b459832-b875-49a6-a7c3-253fa6c8e45a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503873 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" 
Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503897 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:14:44.503832 master-0 kubenswrapper[26474]: I0223 13:14:44.503919 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mhb\" (UniqueName: \"kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.503945 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.503981 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504078 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504106 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504132 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504185 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9lvg\" (UniqueName: \"kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504215 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504241 26474 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504263 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504287 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504317 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504365 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmqj\" (UniqueName: \"kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj\") pod \"network-metrics-daemon-bbrcr\" (UID: 
\"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504387 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq2rn\" (UniqueName: \"kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504413 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504432 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504450 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-utilities\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504467 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504490 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504512 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504741 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-utilities\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504986 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/878aa813-a8b9-4a6f-8086-778df276d0d7-trusted-ca\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 
13:14:44.505105 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bed6748-374e-4d8a-92a0-36d7d735d6b7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.504806 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505168 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505195 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/affc63b7-db45-429d-82ff-e50f6aae51dc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505219 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505244 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fsdx\" (UniqueName: \"kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505274 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbk8g\" (UniqueName: \"kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505300 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqzj\" (UniqueName: \"kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505324 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjkkc\" (UniqueName: \"kubernetes.io/projected/0d134032-1c35-4b69-9336-bcdc9c1cb87d-kube-api-access-wjkkc\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: 
\"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505364 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6d4r\" (UniqueName: \"kubernetes.io/projected/8422281d-af45-4f17-8f15-ac3fd9da4bbc-kube-api-access-d6d4r\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505386 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505408 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505442 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cjfj\" (UniqueName: \"kubernetes.io/projected/06ccd378-23ee-49b7-a435-4b01de772155-kube-api-access-7cjfj\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505462 26474 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cpnzd\" (UniqueName: \"kubernetes.io/projected/b12352eb-04d7-4419-b1bf-d08bca9da599-kube-api-access-cpnzd\") pod \"network-check-source-58fb6744f5-b7dr8\" (UID: \"b12352eb-04d7-4419-b1bf-d08bca9da599\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505490 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505512 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-config\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505532 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505553 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-wtmp\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " 
pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505572 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505591 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6s7\" (UniqueName: \"kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505610 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505628 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505647 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505673 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505694 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505715 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg9l8\" (UniqueName: \"kubernetes.io/projected/5b459832-b875-49a6-a7c3-253fa6c8e45a-kube-api-access-wg9l8\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505736 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-srlm4\" (UID: 
\"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505755 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505775 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505797 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505815 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505833 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505851 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-cache\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505869 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-key\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505889 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmcjv\" (UniqueName: \"kubernetes.io/projected/9e0e3072-a35c-4404-891c-f31fafd0b4b1-kube-api-access-rmcjv\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505908 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xrl\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.505926 26474 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wr82\" (UniqueName: \"kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506693 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-cache\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506304 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/878aa813-a8b9-4a6f-8086-778df276d0d7-metrics-tls\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506443 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daf0176-92e7-4642-8643-4afbefb77235-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506488 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-srlm4\" (UID: 
\"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506570 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-metrics-tls\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506757 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-key\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506597 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506694 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506574 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-config\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506820 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z8xh\" (UniqueName: \"kubernetes.io/projected/affc63b7-db45-429d-82ff-e50f6aae51dc-kube-api-access-5z8xh\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506851 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506877 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhq2x\" (UniqueName: \"kubernetes.io/projected/47dedc5d-1288-4020-b481-5dca68a7d437-kube-api-access-hhq2x\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506896 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 
23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506918 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506939 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506963 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.506987 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507007 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507025 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-conf\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507045 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507080 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507124 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt4vh\" (UniqueName: \"kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507160 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507189 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507221 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507255 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507286 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-catalog-content\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507315 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-catalog-content\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507272 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-iptables-alerter-script\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507387 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507394 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-catalog-content\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507414 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507416 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-catalog-content\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507453 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b64s6\" (UniqueName: \"kubernetes.io/projected/6dc83a57-34c5-4c64-97d3-b6191ba690eb-kube-api-access-b64s6\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507501 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507566 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 
23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507592 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507687 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507715 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5zks\" (UniqueName: \"kubernetes.io/projected/8a544f5a-06b6-4297-a845-d81e9ab9ece7-kube-api-access-t5zks\") pod \"migrator-5c85bff57-xzh2g\" (UID: \"8a544f5a-06b6-4297-a845-d81e9ab9ece7\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507739 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ce55de54-8441-4a16-8b57-598042869000-snapshots\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507763 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507783 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507804 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507828 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78mm\" (UniqueName: \"kubernetes.io/projected/3ccbaed9-ab28-47c0-a585-648b9251fd11-kube-api-access-q78mm\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507831 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35e97ed9-695d-483e-8878-4f231c79f1d2-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: 
\"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507848 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507868 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507888 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507909 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507930 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-images\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507952 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-catalog-content\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507971 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.507990 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508014 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: 
I0223 13:14:44.508033 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508053 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxv7\" (UniqueName: \"kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508073 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508091 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508113 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508133 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8c76\" (UniqueName: \"kubernetes.io/projected/ae8b0e50-59ee-44a9-9a66-8febb833b771-kube-api-access-n8c76\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508152 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-utilities\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508170 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508189 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ccbaed9-ab28-47c0-a585-648b9251fd11-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508210 26474 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508230 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0a29266-d968-444d-82bb-085ff1d6e506-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508264 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-kubernetes\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508291 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508300 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508311 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:14:44.508451 master-0 kubenswrapper[26474]: I0223 13:14:44.508330 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-trusted-ca-bundle\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.508558 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-utilities\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.508744 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-config\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.508789 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18386753-ec74-456d-838d-98c07c169b4b-webhook-cert\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.508901 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-catalog-content\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.508943 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-catalog-content\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.508968 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22p85\" (UniqueName: \"kubernetes.io/projected/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-api-access-22p85\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509002 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509023 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrjc\" (UniqueName: \"kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509071 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6b4v\" (UniqueName: \"kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509094 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b0d84-a344-43e4-b9c4-c8e0670528de-config\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509093 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509142 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-ssl-certs\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509174 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509201 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e96ce488-0099-43de-9933-425b7c981055-catalog-content\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509219 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509310 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509354 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509373 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509397 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509416 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509434 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509451 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509470 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509491 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509511 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndf8h\" (UniqueName: \"kubernetes.io/projected/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-kube-api-access-ndf8h\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509520 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0a976c-1492-4989-a5ff-e386564dd6ba-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509530 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509570 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ght2z\" (UniqueName: \"kubernetes.io/projected/7cadeb05-9298-4bcf-b6f2-659c68eba020-kube-api-access-ght2z\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509606 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-node-pullsecrets\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509632 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xp47\" (UniqueName: \"kubernetes.io/projected/e96ce488-0099-43de-9933-425b7c981055-kube-api-access-7xp47\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509658 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509684 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-tuned\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509713 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/47dedc5d-1288-4020-b481-5dca68a7d437-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509742 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509772 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6xw4\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-kube-api-access-r6xw4\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509798 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509819 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509858 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-dir\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509882 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509900 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6dc83a57-34c5-4c64-97d3-b6191ba690eb-hosts-file\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509917 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509938 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24d878bd-05cd-414e-94c1-a3e9ce637331-kube-api-access\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509958 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l66s\" (UniqueName: \"kubernetes.io/projected/73ba4f16-0217-4bf1-8fc2-6b385eda0771-kube-api-access-7l66s\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509976 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2gm\" (UniqueName: \"kubernetes.io/projected/57803492-e1dd-4994-8330-1e9b393d54fd-kube-api-access-vg2gm\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509994 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510011 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510029 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/762249c6-b548-4733-8b78-64f73430bfbd-tmpfs\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510046 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510062 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d58817c-970f-47b1-a5a5-a491f3e93426-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510066 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-utilities\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510095 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-utilities\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510114 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510125 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0e3072-a35c-4404-891c-f31fafd0b4b1-utilities\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510133 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-serving-cert\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510384 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ce55de54-8441-4a16-8b57-598042869000-snapshots\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.510559 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.509635 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-env-overrides\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511184 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511225 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511411 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-cabundle\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511441 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jftvv\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-kube-api-access-jftvv\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511461 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-sys\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511480 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfl9v\" (UniqueName: \"kubernetes.io/projected/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-kube-api-access-wfl9v\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511499 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-modprobe-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511496 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/762249c6-b548-4733-8b78-64f73430bfbd-tmpfs\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511588 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-host\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511637 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-tmp\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511673 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdbct\" (UniqueName: \"kubernetes.io/projected/bf57b864-25d7-4420-9052-04dd580a9f7d-kube-api-access-bdbct\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511704 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511726 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f348bffa-b2f6-4695-88a7-923625e7fb02-serving-cert\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511740 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511779 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-serving-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511811 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511840 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511866 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-audit\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511896 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511909 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-signing-cabundle\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511928 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvg7b\" (UniqueName: \"kubernetes.io/projected/e5802841-52dc-4d15-a252-0eac70e9fbbc-kube-api-access-nvg7b\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511956 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.511729 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-tuned\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512021 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovn-node-metrics-cert\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512184 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512268 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0a976c-1492-4989-a5ff-e386564dd6ba-config\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512517 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8422281d-af45-4f17-8f15-ac3fd9da4bbc-tmp\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512530 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3daf0176-92e7-4642-8643-4afbefb77235-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512569 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7c80f4d-6b28-44f4-beef-01e705260452-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512649 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512685 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znzzv\" (UniqueName: \"kubernetes.io/projected/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-kube-api-access-znzzv\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512716 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512763 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512802 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.512878 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5284f9-cbb7-400b-ab39-bfef60ec198b-utilities\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.513169 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-client\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.513196 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.513210 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.513238 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-config\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz"
Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.513292 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.513580 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515403 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515458 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn9mt\" (UniqueName: \"kubernetes.io/projected/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-kube-api-access-nn9mt\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515486 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515516 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515537 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515562 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j744d\" (UniqueName: \"kubernetes.io/projected/3a5284f9-cbb7-400b-ab39-bfef60ec198b-kube-api-access-j744d\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515585 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:44.518724 master-0 
kubenswrapper[26474]: I0223 13:14:44.515606 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515627 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515651 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515675 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9ea16701-bd22-4fc0-90ea-f114b52574f8-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515699 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-r66qv\" 
(UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515710 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515719 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515850 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxjf\" (UniqueName: \"kubernetes.io/projected/762249c6-b548-4733-8b78-64f73430bfbd-kube-api-access-mfxjf\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515887 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6cl\" (UniqueName: \"kubernetes.io/projected/2acc6d35-5679-4fac-970f-3d2ff954cc33-kube-api-access-kc6cl\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515915 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515943 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mkd2\" (UniqueName: \"kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.515977 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sh26\" (UniqueName: \"kubernetes.io/projected/ce55de54-8441-4a16-8b57-598042869000-kube-api-access-6sh26\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516003 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl5r2\" (UniqueName: \"kubernetes.io/projected/922e0be5-23c2-481e-89be-e918dc4ce90c-kube-api-access-sl5r2\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516018 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-ovnkube-script-lib\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 
13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516031 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516065 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516096 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516123 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dcr\" (UniqueName: \"kubernetes.io/projected/5793184d-de96-49ad-a060-0fa0cf278a9c-kube-api-access-v9dcr\") pod \"csi-snapshot-controller-6847bb4785-zw4nq\" (UID: \"5793184d-de96-49ad-a060-0fa0cf278a9c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516180 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/9ea16701-bd22-4fc0-90ea-f114b52574f8-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516627 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516678 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516703 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516731 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516761 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-root\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516788 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8b0e50-59ee-44a9-9a66-8febb833b771-metrics-client-ca\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.518724 master-0 kubenswrapper[26474]: I0223 13:14:44.516812 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgfqh\" (UniqueName: \"kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh\") pod \"csi-snapshot-controller-operator-6fb4df594f-f5n2p\" (UID: \"4b9d6485-cf67-49c5-99c1-b8582a0bab70\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.516833 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.516857 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zx8dp\" (UniqueName: \"kubernetes.io/projected/b0a29266-d968-444d-82bb-085ff1d6e506-kube-api-access-zx8dp\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517331 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pntn4\" (UniqueName: \"kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517414 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517448 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d48d286d-4f37-4027-86cd-1580e6076613-cni-binary-copy\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517449 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " 
pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517491 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517525 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517735 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/9bed6748-374e-4d8a-92a0-36d7d735d6b7-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517741 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f348bffa-b2f6-4695-88a7-923625e7fb02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517878 26474 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/18386753-ec74-456d-838d-98c07c169b4b-ovnkube-identity-cm\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517924 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7c80f4d-6b28-44f4-beef-01e705260452-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:14:44.524860 master-0 kubenswrapper[26474]: I0223 13:14:44.517949 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/99f14e64-228f-4b9e-991f-ee398fe7bb8a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.532481 master-0 kubenswrapper[26474]: I0223 13:14:44.532422 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 23 13:14:44.540035 master-0 kubenswrapper[26474]: I0223 13:14:44.539965 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.553792 master-0 kubenswrapper[26474]: I0223 13:14:44.553731 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 23 
13:14:44.572576 master-0 kubenswrapper[26474]: I0223 13:14:44.572509 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 13:14:44.578541 master-0 kubenswrapper[26474]: I0223 13:14:44.578490 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.606253 master-0 kubenswrapper[26474]: I0223 13:14:44.606126 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9jc\" (UniqueName: \"kubernetes.io/projected/d71885db-c29e-429a-aa1f-1c274796a69f-kube-api-access-9z9jc\") pod \"openshift-controller-manager-operator-584cc7bcb5-dpxl4\" (UID: \"d71885db-c29e-429a-aa1f-1c274796a69f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-dpxl4" Feb 23 13:14:44.630213 master-0 kubenswrapper[26474]: I0223 13:14:44.629428 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.630213 master-0 kubenswrapper[26474]: I0223 13:14:44.629533 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-sys\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.630213 master-0 kubenswrapper[26474]: I0223 13:14:44.629860 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-audit-dir\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.630213 master-0 kubenswrapper[26474]: I0223 13:14:44.629862 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-system-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.630213 master-0 kubenswrapper[26474]: I0223 13:14:44.629923 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.629947 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-audit-dir\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630285 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-sys\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.629968 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-lib-modules\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630387 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-lib-modules\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630236 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630472 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630522 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630589 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-var-lib-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630608 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-cni-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630665 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630701 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630699 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-systemd-units\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630731 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630769 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-ovn\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630885 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630903 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:44.630962 master-0 kubenswrapper[26474]: I0223 13:14:44.630841 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-netns\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.631864 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-wtmp\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632202 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-wtmp\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632236 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632261 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632320 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-netd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632360 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-k8s-cni-cncf-io\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632451 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632552 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632691 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632724 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632792 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-kubelet\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632837 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.632848 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.633001 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.633015 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.633099 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-systemd\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.633159 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.633184 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-conf\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.633278 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-conf-dir\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.633454 master-0 kubenswrapper[26474]: I0223 13:14:44.633382 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6mk9\" (UniqueName: \"kubernetes.io/projected/f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b-kube-api-access-m6mk9\") pod \"dns-operator-8c7d49845-g8fdn\" (UID: \"f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-g8fdn" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633486 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysctl-conf\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633277 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633444 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6ff7868e-f0d3-4c63-901f-fed11d623cf1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633571 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633591 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 
kubenswrapper[26474]: I0223 13:14:44.633613 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633642 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633674 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-cnibin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633718 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-bin\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633761 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633822 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-system-cni-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.633931 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634134 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634181 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-kubernetes\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634225 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 
kubenswrapper[26474]: I0223 13:14:44.634262 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634284 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634305 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634357 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-ssl-certs\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634379 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: 
\"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634401 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634418 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634464 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-node-pullsecrets\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634488 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634519 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634546 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-host-etc-kube\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634610 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-multus-certs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634632 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634641 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-os-release\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634672 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-cni-multus\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634692 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24d878bd-05cd-414e-94c1-a3e9ce637331-etc-ssl-certs\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634705 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-kubernetes\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634734 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/922e0be5-23c2-481e-89be-e918dc4ce90c-node-pullsecrets\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634753 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634772 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-dir\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634782 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-etc-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634805 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-run-netns\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634813 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-dir\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634890 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634912 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-etc-kubernetes\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.634878 master-0 kubenswrapper[26474]: I0223 13:14:44.634919 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6dc83a57-34c5-4c64-97d3-b6191ba690eb-hosts-file\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.634952 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6dc83a57-34c5-4c64-97d3-b6191ba690eb-hosts-file\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.634966 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635045 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635051 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-sys\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635137 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-host\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635143 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-sys\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635260 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-host\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635731 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-modprobe-d\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635797 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod 
\"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635878 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.635969 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636022 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636369 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-host-slash\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636410 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-modprobe-d\") pod \"tuned-mjpd9\" (UID: 
\"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636433 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-run-openvswitch\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636456 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-hostroot\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636582 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-slash\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636590 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x" Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636642 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-root\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" 
Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636688 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-host-var-lib-kubelet\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636731 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636758 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ae8b0e50-59ee-44a9-9a66-8febb833b771-root\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636761 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636790 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-host-cni-bin\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.636812 master-0 kubenswrapper[26474]: I0223 13:14:44.636839 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.636884 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.636890 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-node-log\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.636951 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-log-socket\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637022 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-os-release\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637075 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637116 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysconfig\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637136 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-var-lib-kubelet\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637184 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-systemd\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637204 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/57803492-e1dd-4994-8330-1e9b393d54fd-rootfs\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637224 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-run\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637303 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5b459832-b875-49a6-a7c3-253fa6c8e45a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637352 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637549 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d48d286d-4f37-4027-86cd-1580e6076613-multus-socket-dir-parent\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:44.637580 master-0 kubenswrapper[26474]: I0223 13:14:44.637588 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99f14e64-228f-4b9e-991f-ee398fe7bb8a-cnibin\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4"
Feb 23 13:14:44.637978 master-0 kubenswrapper[26474]: I0223 13:14:44.637639 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-sysconfig\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.637978 master-0 kubenswrapper[26474]: I0223 13:14:44.637673 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-var-lib-kubelet\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.637978 master-0 kubenswrapper[26474]: I0223 13:14:44.637712 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-etc-systemd\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.637978 master-0 kubenswrapper[26474]: I0223 13:14:44.637739 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/57803492-e1dd-4994-8330-1e9b393d54fd-rootfs\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:14:44.637978 master-0 kubenswrapper[26474]: I0223 13:14:44.637771 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8422281d-af45-4f17-8f15-ac3fd9da4bbc-run\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9"
Feb 23 13:14:44.637978 master-0 kubenswrapper[26474]: I0223 13:14:44.637804 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5b459832-b875-49a6-a7c3-253fa6c8e45a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf"
Feb 23 13:14:44.653729 master-0 kubenswrapper[26474]: I0223 13:14:44.653660 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfqmb\" (UniqueName: \"kubernetes.io/projected/d9b02d3c-f671-4850-8c6e-315044a1376c-kube-api-access-qfqmb\") pod \"etcd-operator-545bf96f4d-dk5t4\" (UID: \"d9b02d3c-f671-4850-8c6e-315044a1376c\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-dk5t4"
Feb 23 13:14:44.654332 master-0 kubenswrapper[26474]: I0223 13:14:44.654248 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-t7xq8"
Feb 23 13:14:44.676089 master-0 kubenswrapper[26474]: I0223 13:14:44.676020 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:14:44.692762 master-0 kubenswrapper[26474]: I0223 13:14:44.692280 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 23 13:14:44.695459 master-0 kubenswrapper[26474]: I0223 13:14:44.694632 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-encryption-config\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.713439 master-0 kubenswrapper[26474]: I0223 13:14:44.713366 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 23 13:14:44.722952 master-0 kubenswrapper[26474]: I0223 13:14:44.722898 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/47dedc5d-1288-4020-b481-5dca68a7d437-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845"
Feb 23 13:14:44.733272 master-0 kubenswrapper[26474]: I0223 13:14:44.733170 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-dsztp"
Feb 23 13:14:44.758950 master-0 kubenswrapper[26474]: I0223 13:14:44.758800 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 23 13:14:44.829191 master-0 kubenswrapper[26474]: I0223 13:14:44.829116 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-client\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.834968 master-0 kubenswrapper[26474]: I0223 13:14:44.834892 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 23 13:14:44.835313 master-0 kubenswrapper[26474]: I0223 13:14:44.835264 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 23 13:14:44.836790 master-0 kubenswrapper[26474]: I0223 13:14:44.836745 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 23 13:14:44.844001 master-0 kubenswrapper[26474]: I0223 13:14:44.843943 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/affc63b7-db45-429d-82ff-e50f6aae51dc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh"
Feb 23 13:14:44.844001 master-0 kubenswrapper[26474]: I0223 13:14:44.843942 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922e0be5-23c2-481e-89be-e918dc4ce90c-serving-cert\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.844169 master-0 kubenswrapper[26474]: I0223 13:14:44.844089 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-audit\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.854445 master-0 kubenswrapper[26474]: I0223 13:14:44.854393 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 23 13:14:44.882025 master-0 kubenswrapper[26474]: I0223 13:14:44.881758 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 23 13:14:44.885488 master-0 kubenswrapper[26474]: I0223 13:14:44.885419 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06ccd378-23ee-49b7-a435-4b01de772155-proxy-tls\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr"
Feb 23 13:14:44.893157 master-0 kubenswrapper[26474]: I0223 13:14:44.893124 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-jh64m"
Feb 23 13:14:44.912580 master-0 kubenswrapper[26474]: I0223 13:14:44.912507 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 23 13:14:44.933660 master-0 kubenswrapper[26474]: I0223 13:14:44.933584 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 23 13:14:44.944645 master-0 kubenswrapper[26474]: I0223 13:14:44.942664 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th"
Feb 23 13:14:44.944645 master-0 kubenswrapper[26474]: I0223 13:14:44.942779 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06ccd378-23ee-49b7-a435-4b01de772155-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr"
Feb 23 13:14:44.944645 master-0 kubenswrapper[26474]: I0223 13:14:44.943029 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57803492-e1dd-4994-8330-1e9b393d54fd-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:14:44.954411 master-0 kubenswrapper[26474]: I0223 13:14:44.954259 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 23 13:14:44.977833 master-0 kubenswrapper[26474]: I0223 13:14:44.977737 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 23 13:14:44.980681 master-0 kubenswrapper[26474]: I0223 13:14:44.980620 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-trusted-ca-bundle\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:44.996656 master-0 kubenswrapper[26474]: I0223 13:14:44.992886 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 23 13:14:45.015803 master-0 kubenswrapper[26474]: I0223 13:14:45.015687 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 23 13:14:45.034414 master-0 kubenswrapper[26474]: I0223 13:14:45.031930 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 23 13:14:45.034414 master-0 kubenswrapper[26474]: I0223 13:14:45.032658 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:14:45.036323 master-0 kubenswrapper[26474]: I0223 13:14:45.036297 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 23 13:14:45.048718 master-0 kubenswrapper[26474]: I0223 13:14:45.048105 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 23 13:14:45.052454 master-0 kubenswrapper[26474]: I0223 13:14:45.052418 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:14:45.056714 master-0 kubenswrapper[26474]: I0223 13:14:45.056682 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:14:45.082369 master-0 kubenswrapper[26474]: I0223 13:14:45.081635 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:14:45.087366 master-0 kubenswrapper[26474]: I0223 13:14:45.083191 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:14:45.121645 master-0 kubenswrapper[26474]: I0223 13:14:45.121451 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 23 13:14:45.125436 master-0 kubenswrapper[26474]: I0223 13:14:45.124484 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:14:45.125436 master-0 kubenswrapper[26474]: I0223 13:14:45.124933 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:14:45.129175 master-0 kubenswrapper[26474]: I0223 13:14:45.129124 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-config\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845"
Feb 23 13:14:45.148723 master-0 kubenswrapper[26474]: I0223 13:14:45.148623 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:14:45.153791 master-0 kubenswrapper[26474]: I0223 13:14:45.153682 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:14:45.154896 master-0 kubenswrapper[26474]: I0223 13:14:45.154833 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:14:45.161361 master-0 kubenswrapper[26474]: I0223 13:14:45.159530 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") pod \"27c1e327-cb40-4b36-b371-20d1271b8d8d\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") "
Feb 23 13:14:45.161361 master-0 kubenswrapper[26474]: I0223 13:14:45.159621 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") pod \"27c1e327-cb40-4b36-b371-20d1271b8d8d\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") "
Feb 23 13:14:45.161361 master-0 kubenswrapper[26474]: I0223 13:14:45.160063 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock" (OuterVolumeSpecName: "var-lock") pod "27c1e327-cb40-4b36-b371-20d1271b8d8d" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:14:45.161361 master-0 kubenswrapper[26474]: I0223 13:14:45.160215 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "27c1e327-cb40-4b36-b371-20d1271b8d8d" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:14:45.167243 master-0 kubenswrapper[26474]: I0223 13:14:45.164470 26474 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:14:45.167243 master-0 kubenswrapper[26474]: I0223 13:14:45.164531 26474 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27c1e327-cb40-4b36-b371-20d1271b8d8d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:14:45.173948 master-0 kubenswrapper[26474]: I0223 13:14:45.173896 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-6tkzc"
Feb 23 13:14:45.194669 master-0 kubenswrapper[26474]: I0223 13:14:45.194518 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 23 13:14:45.200766 master-0 kubenswrapper[26474]: I0223 13:14:45.200722 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0a29266-d968-444d-82bb-085ff1d6e506-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:14:45.200940 master-0 kubenswrapper[26474]: I0223 13:14:45.200867 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ccbaed9-ab28-47c0-a585-648b9251fd11-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2"
Feb 23 13:14:45.200979 master-0 kubenswrapper[26474]: I0223 13:14:45.200953 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8b0e50-59ee-44a9-9a66-8febb833b771-metrics-client-ca\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:14:45.207893 master-0 kubenswrapper[26474]: I0223 13:14:45.207846 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:14:45.213987 master-0 kubenswrapper[26474]: I0223 13:14:45.213759 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 23 13:14:45.217601 master-0 kubenswrapper[26474]: I0223 13:14:45.217563 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:14:45.237037 master-0 kubenswrapper[26474]: I0223 13:14:45.236985 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-m9scs"
Feb 23 13:14:45.252781 master-0 kubenswrapper[26474]: I0223 13:14:45.252673 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 23 13:14:45.256631 master-0 kubenswrapper[26474]: I0223 13:14:45.256598 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0a29266-d968-444d-82bb-085ff1d6e506-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:14:45.272773 master-0 kubenswrapper[26474]: I0223 13:14:45.272709 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 23 13:14:45.282862 master-0 kubenswrapper[26474]: I0223 13:14:45.282820 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-etcd-serving-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:45.292932 master-0 kubenswrapper[26474]: I0223 13:14:45.292865 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 23 13:14:45.300666 master-0 kubenswrapper[26474]: I0223 13:14:45.300591 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47dedc5d-1288-4020-b481-5dca68a7d437-images\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845"
Feb 23 13:14:45.312560 master-0 kubenswrapper[26474]: I0223 13:14:45.312509 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 23 13:14:45.313821 master-0 kubenswrapper[26474]: I0223 13:14:45.313774 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/922e0be5-23c2-481e-89be-e918dc4ce90c-image-import-ca\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:45.332700 master-0 kubenswrapper[26474]: I0223 13:14:45.332641 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 23 13:14:45.348149 master-0 kubenswrapper[26474]: I0223 13:14:45.347580 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d"
Feb 23 13:14:45.352772 master-0 kubenswrapper[26474]: I0223 13:14:45.352711 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rlrxt"
Feb 23 13:14:45.385941 master-0 kubenswrapper[26474]: I0223 13:14:45.383619 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qsvg\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-kube-api-access-9qsvg\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:14:45.407609 master-0 kubenswrapper[26474]: I0223 13:14:45.407516 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92eaa2e2-61cd-4279-a81f-72db51308148-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-shphl\" (UID: \"92eaa2e2-61cd-4279-a81f-72db51308148\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-shphl"
Feb 23 13:14:45.412522 master-0 kubenswrapper[26474]: I0223 13:14:45.412468 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:14:45.431378 master-0 kubenswrapper[26474]: I0223 13:14:45.431299 26474 request.go:700] Waited for 1.002306254s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-4whws&limit=500&resourceVersion=0
Feb 23 13:14:45.433067 master-0 kubenswrapper[26474]: I0223 13:14:45.433013 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4whws"
Feb 23 13:14:45.452310 master-0 kubenswrapper[26474]: I0223 13:14:45.452245 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 23 13:14:45.454643 master-0 kubenswrapper[26474]: I0223 13:14:45.454595 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b459832-b875-49a6-a7c3-253fa6c8e45a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf"
Feb 23 13:14:45.473370 master-0 kubenswrapper[26474]: I0223 13:14:45.473298 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 23 13:14:45.473820 master-0 kubenswrapper[26474]: I0223 13:14:45.473762 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf"
Feb 23 13:14:45.491948 master-0 kubenswrapper[26474]: I0223 13:14:45.491902 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:14:45.502592 master-0 kubenswrapper[26474]: E0223 13:14:45.502519 26474 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502528 26474 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502692 26474 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for
the condition Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502730 26474 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502749 26474 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502546 26474 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502768 26474 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502659 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config podName:f47fa225-93fd-458b-b450-a0411e629afd nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.002621539 +0000 UTC m=+7.849129216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config") pod "route-controller-manager-648db577cf-2sqzl" (UID: "f47fa225-93fd-458b-b450-a0411e629afd") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502874 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs podName:bdad149d-da6f-49ac-85e5-deb01f161166 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:14:46.002853285 +0000 UTC m=+7.849360972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs") pod "machine-config-server-97rhg" (UID: "bdad149d-da6f-49ac-85e5-deb01f161166") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502901 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle podName:ce55de54-8441-4a16-8b57-598042869000 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.002891936 +0000 UTC m=+7.849399623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle") pod "insights-operator-59b498fcfb-sswng" (UID: "ce55de54-8441-4a16-8b57-598042869000") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.502902 master-0 kubenswrapper[26474]: E0223 13:14:45.502922 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates podName:54001c8e-cb57-47dc-8594-9daed4190bda nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.002914756 +0000 UTC m=+7.849422443 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates") pod "prometheus-operator-admission-webhook-75d56db95f-ld22t" (UID: "54001c8e-cb57-47dc-8594-9daed4190bda") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.502947 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert podName:ce55de54-8441-4a16-8b57-598042869000 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.002937897 +0000 UTC m=+7.849445584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert") pod "insights-operator-59b498fcfb-sswng" (UID: "ce55de54-8441-4a16-8b57-598042869000") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.502968 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca podName:945907dd-f6b3-400f-b539-e1310eb11dd7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.002961137 +0000 UTC m=+7.849468824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca") pod "cloud-credential-operator-6968c58f46-87hx7" (UID: "945907dd-f6b3-400f-b539-e1310eb11dd7") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.502986 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config podName:5ede583b-44b0-42af-92c9-f7b8938f7843 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:14:46.002976128 +0000 UTC m=+7.849483815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config") pod "cluster-baremetal-operator-d6bb9bb76-vfkqg" (UID: "5ede583b-44b0-42af-92c9-f7b8938f7843") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.503048 26474 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.503141 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images podName:5ede583b-44b0-42af-92c9-f7b8938f7843 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.003119981 +0000 UTC m=+7.849627698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images") pod "cluster-baremetal-operator-d6bb9bb76-vfkqg" (UID: "5ede583b-44b0-42af-92c9-f7b8938f7843") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.503162 26474 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.503207 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume podName:2acc6d35-5679-4fac-970f-3d2ff954cc33 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.003195323 +0000 UTC m=+7.849703010 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume") pod "dns-default-ljphn" (UID: "2acc6d35-5679-4fac-970f-3d2ff954cc33") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.503356 26474 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.503479 master-0 kubenswrapper[26474]: E0223 13:14:45.503392 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle podName:77ea2b54-bcc2-4c4e-9415-03984721b5b1 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.003384127 +0000 UTC m=+7.849891814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle") pod "apiserver-6c65bdd8f8-vblb2" (UID: "77ea2b54-bcc2-4c4e-9415-03984721b5b1") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.504682 master-0 kubenswrapper[26474]: E0223 13:14:45.504640 26474 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504682 master-0 kubenswrapper[26474]: E0223 13:14:45.504679 26474 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504711 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert podName:762249c6-b548-4733-8b78-64f73430bfbd nodeName:}" failed. 
No retries permitted until 2026-02-23 13:14:46.004689409 +0000 UTC m=+7.851197086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert") pod "packageserver-544948f94-sshjz" (UID: "762249c6-b548-4733-8b78-64f73430bfbd") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504727 26474 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504740 26474 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504716 26474 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504738 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls podName:65ecd69f-3f1b-41d7-ba1f-225acaa735d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.00472863 +0000 UTC m=+7.851236397 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls") pod "metrics-server-69f7f878d4-746vx" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504777 26474 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-639sbo1a4as7e: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504779 26474 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504789 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert podName:945907dd-f6b3-400f-b539-e1310eb11dd7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.004771811 +0000 UTC m=+7.851279488 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-87hx7" (UID: "945907dd-f6b3-400f-b539-e1310eb11dd7") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.504811 master-0 kubenswrapper[26474]: E0223 13:14:45.504806 26474 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.505167 master-0 kubenswrapper[26474]: E0223 13:14:45.504751 26474 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.505167 master-0 kubenswrapper[26474]: E0223 13:14:45.504815 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle podName:65ecd69f-3f1b-41d7-ba1f-225acaa735d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.004805352 +0000 UTC m=+7.851313029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle") pod "metrics-server-69f7f878d4-746vx" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.505167 master-0 kubenswrapper[26474]: E0223 13:14:45.504938 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images podName:a663ecaf-ced2-4c7d-91c8-44e94851f7d6 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.004930585 +0000 UTC m=+7.851438262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images") pod "machine-config-operator-7f8c75f984-522th" (UID: "a663ecaf-ced2-4c7d-91c8-44e94851f7d6") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.505167 master-0 kubenswrapper[26474]: E0223 13:14:45.504959 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert podName:762249c6-b548-4733-8b78-64f73430bfbd nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.004951146 +0000 UTC m=+7.851458933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert") pod "packageserver-544948f94-sshjz" (UID: "762249c6-b548-4733-8b78-64f73430bfbd") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.505167 master-0 kubenswrapper[26474]: E0223 13:14:45.504982 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert podName:bf57b864-25d7-4420-9052-04dd580a9f7d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.004974756 +0000 UTC m=+7.851482523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert") pod "cluster-autoscaler-operator-86b8dc6d6-xljfn" (UID: "bf57b864-25d7-4420-9052-04dd580a9f7d") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.505167 master-0 kubenswrapper[26474]: E0223 13:14:45.505005 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca podName:77ea2b54-bcc2-4c4e-9415-03984721b5b1 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:14:46.004996667 +0000 UTC m=+7.851504454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca") pod "apiserver-6c65bdd8f8-vblb2" (UID: "77ea2b54-bcc2-4c4e-9415-03984721b5b1") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.505167 master-0 kubenswrapper[26474]: E0223 13:14:45.505025 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config podName:0d134032-1c35-4b69-9336-bcdc9c1cb87d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.005018097 +0000 UTC m=+7.851525864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config") pod "machine-approver-7dd9c7d7b9-z7jgz" (UID: "0d134032-1c35-4b69-9336-bcdc9c1cb87d") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.506772 master-0 kubenswrapper[26474]: E0223 13:14:45.506736 26474 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506772 master-0 kubenswrapper[26474]: E0223 13:14:45.506766 26474 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506800 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs podName:73ba4f16-0217-4bf1-8fc2-6b385eda0771 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.006784721 +0000 UTC m=+7.853292398 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs") pod "router-default-7b65dc9fcb-kcfgf" (UID: "73ba4f16-0217-4bf1-8fc2-6b385eda0771") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506820 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert podName:5ede583b-44b0-42af-92c9-f7b8938f7843 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.006812212 +0000 UTC m=+7.853319989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert") pod "cluster-baremetal-operator-d6bb9bb76-vfkqg" (UID: "5ede583b-44b0-42af-92c9-f7b8938f7843") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506822 26474 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506833 26474 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506839 26474 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506866 26474 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506861 
26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config podName:bf57b864-25d7-4420-9052-04dd580a9f7d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.006852733 +0000 UTC m=+7.853360510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config") pod "cluster-autoscaler-operator-86b8dc6d6-xljfn" (UID: "bf57b864-25d7-4420-9052-04dd580a9f7d") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506877 26474 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506892 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls podName:ae8b0e50-59ee-44a9-9a66-8febb833b771 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.006882583 +0000 UTC m=+7.853390370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls") pod "node-exporter-tv6s2" (UID: "ae8b0e50-59ee-44a9-9a66-8febb833b771") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506903 26474 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.506898 master-0 kubenswrapper[26474]: E0223 13:14:45.506908 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls podName:e5802841-52dc-4d15-a252-0eac70e9fbbc nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.006900414 +0000 UTC m=+7.853408191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-686847ff5f-pqjsm" (UID: "e5802841-52dc-4d15-a252-0eac70e9fbbc") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.507360 master-0 kubenswrapper[26474]: E0223 13:14:45.506931 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config podName:77ea2b54-bcc2-4c4e-9415-03984721b5b1 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.006922624 +0000 UTC m=+7.853430411 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config") pod "apiserver-6c65bdd8f8-vblb2" (UID: "77ea2b54-bcc2-4c4e-9415-03984721b5b1") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.507360 master-0 kubenswrapper[26474]: E0223 13:14:45.506946 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate podName:73ba4f16-0217-4bf1-8fc2-6b385eda0771 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.006941325 +0000 UTC m=+7.853449182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate") pod "router-default-7b65dc9fcb-kcfgf" (UID: "73ba4f16-0217-4bf1-8fc2-6b385eda0771") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.507360 master-0 kubenswrapper[26474]: E0223 13:14:45.506961 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert podName:90a694bb-fe3e-4478-bbb4-d2be9cd4c57f nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.006954615 +0000 UTC m=+7.853462292 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert") pod "openshift-config-operator-6f47d587d6-8wrb6" (UID: "90a694bb-fe3e-4478-bbb4-d2be9cd4c57f") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.507360 master-0 kubenswrapper[26474]: E0223 13:14:45.506992 26474 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.507360 master-0 kubenswrapper[26474]: E0223 13:14:45.507025 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies podName:77ea2b54-bcc2-4c4e-9415-03984721b5b1 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.007019667 +0000 UTC m=+7.853527344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies") pod "apiserver-6c65bdd8f8-vblb2" (UID: "77ea2b54-bcc2-4c4e-9415-03984721b5b1") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.507360 master-0 kubenswrapper[26474]: E0223 13:14:45.507078 26474 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.507360 master-0 kubenswrapper[26474]: E0223 13:14:45.507124 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config podName:9ea16701-bd22-4fc0-90ea-f114b52574f8 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.007114959 +0000 UTC m=+7.853622636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-59584d565f-r66qv" (UID: "9ea16701-bd22-4fc0-90ea-f114b52574f8") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.508095 master-0 kubenswrapper[26474]: E0223 13:14:45.508063 26474 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.508253 master-0 kubenswrapper[26474]: E0223 13:14:45.508103 26474 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.508253 master-0 kubenswrapper[26474]: E0223 13:14:45.508117 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token podName:bdad149d-da6f-49ac-85e5-deb01f161166 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.008106764 +0000 UTC m=+7.854614451 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token") pod "machine-config-server-97rhg" (UID: "bdad149d-da6f-49ac-85e5-deb01f161166") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.508253 master-0 kubenswrapper[26474]: E0223 13:14:45.508127 26474 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.508253 master-0 kubenswrapper[26474]: E0223 13:14:45.508157 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert podName:898e6c96-73d5-4dc5-a383-986599a5bcd9 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.008144924 +0000 UTC m=+7.854652681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert") pod "catalog-operator-596f79dd6f-jbhn6" (UID: "898e6c96-73d5-4dc5-a383-986599a5bcd9") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.508253 master-0 kubenswrapper[26474]: E0223 13:14:45.508178 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls podName:3ccbaed9-ab28-47c0-a585-648b9251fd11 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.008167735 +0000 UTC m=+7.854675412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-p6hj2" (UID: "3ccbaed9-ab28-47c0-a585-648b9251fd11") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.509357 master-0 kubenswrapper[26474]: E0223 13:14:45.509298 26474 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.509357 master-0 kubenswrapper[26474]: E0223 13:14:45.509360 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle podName:73ba4f16-0217-4bf1-8fc2-6b385eda0771 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.009350603 +0000 UTC m=+7.855858280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle") pod "router-default-7b65dc9fcb-kcfgf" (UID: "73ba4f16-0217-4bf1-8fc2-6b385eda0771") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.509483 master-0 kubenswrapper[26474]: E0223 13:14:45.509367 26474 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.509483 master-0 kubenswrapper[26474]: E0223 13:14:45.509386 26474 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.509483 master-0 kubenswrapper[26474]: E0223 13:14:45.509397 26474 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting 
for the condition Feb 23 13:14:45.509483 master-0 kubenswrapper[26474]: E0223 13:14:45.509413 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images podName:5b459832-b875-49a6-a7c3-253fa6c8e45a nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.009407974 +0000 UTC m=+7.855915651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images") pod "cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" (UID: "5b459832-b875-49a6-a7c3-253fa6c8e45a") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.509483 master-0 kubenswrapper[26474]: E0223 13:14:45.509424 26474 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.509483 master-0 kubenswrapper[26474]: E0223 13:14:45.509444 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert podName:7cadeb05-9298-4bcf-b6f2-659c68eba020 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.009423455 +0000 UTC m=+7.855931172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert") pod "olm-operator-5499d7f7bb-c7hng" (UID: "7cadeb05-9298-4bcf-b6f2-659c68eba020") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.509483 master-0 kubenswrapper[26474]: E0223 13:14:45.509481 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert podName:898e6c96-73d5-4dc5-a383-986599a5bcd9 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:14:46.009461397 +0000 UTC m=+7.855969114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert") pod "catalog-operator-596f79dd6f-jbhn6" (UID: "898e6c96-73d5-4dc5-a383-986599a5bcd9") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.509815 master-0 kubenswrapper[26474]: E0223 13:14:45.509508 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca podName:f47fa225-93fd-458b-b450-a0411e629afd nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.009497178 +0000 UTC m=+7.856004905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca") pod "route-controller-manager-648db577cf-2sqzl" (UID: "f47fa225-93fd-458b-b450-a0411e629afd") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.510549 master-0 kubenswrapper[26474]: E0223 13:14:45.510519 26474 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.510632 master-0 kubenswrapper[26474]: E0223 13:14:45.510561 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert podName:7cadeb05-9298-4bcf-b6f2-659c68eba020 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.010552343 +0000 UTC m=+7.857060020 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert") pod "olm-operator-5499d7f7bb-c7hng" (UID: "7cadeb05-9298-4bcf-b6f2-659c68eba020") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.510632 master-0 kubenswrapper[26474]: E0223 13:14:45.510570 26474 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.510632 master-0 kubenswrapper[26474]: E0223 13:14:45.510587 26474 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.510632 master-0 kubenswrapper[26474]: E0223 13:14:45.510624 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls podName:5ede583b-44b0-42af-92c9-f7b8938f7843 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.010615994 +0000 UTC m=+7.857123671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-vfkqg" (UID: "5ede583b-44b0-42af-92c9-f7b8938f7843") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.510808 master-0 kubenswrapper[26474]: E0223 13:14:45.510651 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert podName:24d878bd-05cd-414e-94c1-a3e9ce637331 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.010630654 +0000 UTC m=+7.857138372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert") pod "cluster-version-operator-57476485-8jbxf" (UID: "24d878bd-05cd-414e-94c1-a3e9ce637331") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.511750 master-0 kubenswrapper[26474]: E0223 13:14:45.511683 26474 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.511750 master-0 kubenswrapper[26474]: E0223 13:14:45.511717 26474 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.511750 master-0 kubenswrapper[26474]: E0223 13:14:45.511734 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls podName:2acc6d35-5679-4fac-970f-3d2ff954cc33 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.011724481 +0000 UTC m=+7.858232158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls") pod "dns-default-ljphn" (UID: "2acc6d35-5679-4fac-970f-3d2ff954cc33") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.511750 master-0 kubenswrapper[26474]: E0223 13:14:45.511740 26474 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.512160 master-0 kubenswrapper[26474]: E0223 13:14:45.511769 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle podName:ce55de54-8441-4a16-8b57-598042869000 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:14:46.011754862 +0000 UTC m=+7.858262629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle") pod "insights-operator-59b498fcfb-sswng" (UID: "ce55de54-8441-4a16-8b57-598042869000") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.512160 master-0 kubenswrapper[26474]: E0223 13:14:45.511791 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls podName:a663ecaf-ced2-4c7d-91c8-44e94851f7d6 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.011779723 +0000 UTC m=+7.858287410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls") pod "machine-config-operator-7f8c75f984-522th" (UID: "a663ecaf-ced2-4c7d-91c8-44e94851f7d6") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.512160 master-0 kubenswrapper[26474]: E0223 13:14:45.511881 26474 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.512160 master-0 kubenswrapper[26474]: E0223 13:14:45.511922 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca podName:24d878bd-05cd-414e-94c1-a3e9ce637331 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.011911606 +0000 UTC m=+7.858419353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca") pod "cluster-version-operator-57476485-8jbxf" (UID: "24d878bd-05cd-414e-94c1-a3e9ce637331") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.513430 master-0 kubenswrapper[26474]: E0223 13:14:45.513391 26474 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513511 master-0 kubenswrapper[26474]: E0223 13:14:45.513431 26474 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513511 master-0 kubenswrapper[26474]: E0223 13:14:45.513454 26474 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513511 master-0 kubenswrapper[26474]: E0223 13:14:45.513439 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls podName:7cf2e1eb-fb95-4401-9112-57aee9ebe1e6 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.013429923 +0000 UTC m=+7.859937600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-6dzlv" (UID: "7cf2e1eb-fb95-4401-9112-57aee9ebe1e6") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513511 master-0 kubenswrapper[26474]: E0223 13:14:45.513483 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client podName:77ea2b54-bcc2-4c4e-9415-03984721b5b1 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:14:46.013475624 +0000 UTC m=+7.859983301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client") pod "apiserver-6c65bdd8f8-vblb2" (UID: "77ea2b54-bcc2-4c4e-9415-03984721b5b1") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513511 master-0 kubenswrapper[26474]: E0223 13:14:45.513498 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert podName:77ea2b54-bcc2-4c4e-9415-03984721b5b1 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.013492274 +0000 UTC m=+7.859999951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert") pod "apiserver-6c65bdd8f8-vblb2" (UID: "77ea2b54-bcc2-4c4e-9415-03984721b5b1") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513511 master-0 kubenswrapper[26474]: E0223 13:14:45.513502 26474 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513835 master-0 kubenswrapper[26474]: E0223 13:14:45.513513 26474 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.513835 master-0 kubenswrapper[26474]: I0223 13:14:45.513550 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 23 13:14:45.513835 master-0 kubenswrapper[26474]: E0223 13:14:45.513522 26474 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: 
failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513835 master-0 kubenswrapper[26474]: E0223 13:14:45.513560 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap podName:9ea16701-bd22-4fc0-90ea-f114b52574f8 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.013548706 +0000 UTC m=+7.860056393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-59584d565f-r66qv" (UID: "9ea16701-bd22-4fc0-90ea-f114b52574f8") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.513835 master-0 kubenswrapper[26474]: E0223 13:14:45.513653 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls podName:0d134032-1c35-4b69-9336-bcdc9c1cb87d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.013634048 +0000 UTC m=+7.860141795 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-z7jgz" (UID: "0d134032-1c35-4b69-9336-bcdc9c1cb87d") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.513835 master-0 kubenswrapper[26474]: E0223 13:14:45.513674 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config podName:3ccbaed9-ab28-47c0-a585-648b9251fd11 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:14:46.013664809 +0000 UTC m=+7.860172586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-6dbff8cb4c-p6hj2" (UID: "3ccbaed9-ab28-47c0-a585-648b9251fd11") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.516862 master-0 kubenswrapper[26474]: E0223 13:14:45.516831 26474 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.516930 master-0 kubenswrapper[26474]: E0223 13:14:45.516872 26474 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.516930 master-0 kubenswrapper[26474]: E0223 13:14:45.516896 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs podName:65ecd69f-3f1b-41d7-ba1f-225acaa735d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.016881038 +0000 UTC m=+7.863388825 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs") pod "metrics-server-69f7f878d4-746vx" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.516930 master-0 kubenswrapper[26474]: E0223 13:14:45.516909 26474 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.516930 master-0 kubenswrapper[26474]: E0223 13:14:45.516924 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config podName:0d134032-1c35-4b69-9336-bcdc9c1cb87d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.016912629 +0000 UTC m=+7.863420306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config") pod "machine-approver-7dd9c7d7b9-z7jgz" (UID: "0d134032-1c35-4b69-9336-bcdc9c1cb87d") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.516930 master-0 kubenswrapper[26474]: E0223 13:14:45.516876 26474 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.517092 master-0 kubenswrapper[26474]: E0223 13:14:45.516963 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert podName:f47fa225-93fd-458b-b450-a0411e629afd nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.01695709 +0000 UTC m=+7.863464767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert") pod "route-controller-manager-648db577cf-2sqzl" (UID: "f47fa225-93fd-458b-b450-a0411e629afd") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.517092 master-0 kubenswrapper[26474]: E0223 13:14:45.516983 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth podName:73ba4f16-0217-4bf1-8fc2-6b385eda0771 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.01697177 +0000 UTC m=+7.863479537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth") pod "router-default-7b65dc9fcb-kcfgf" (UID: "73ba4f16-0217-4bf1-8fc2-6b385eda0771") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.518114 master-0 kubenswrapper[26474]: E0223 13:14:45.518086 26474 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.518165 master-0 kubenswrapper[26474]: E0223 13:14:45.518128 26474 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.518196 master-0 kubenswrapper[26474]: E0223 13:14:45.518139 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls podName:57803492-e1dd-4994-8330-1e9b393d54fd nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.018128118 +0000 UTC m=+7.864635885 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls") pod "machine-config-daemon-q8bjq" (UID: "57803492-e1dd-4994-8330-1e9b393d54fd") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.518196 master-0 kubenswrapper[26474]: E0223 13:14:45.518184 26474 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.518252 master-0 kubenswrapper[26474]: E0223 13:14:45.518191 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls podName:9ea16701-bd22-4fc0-90ea-f114b52574f8 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.01818013 +0000 UTC m=+7.864687817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls") pod "kube-state-metrics-59584d565f-r66qv" (UID: "9ea16701-bd22-4fc0-90ea-f114b52574f8") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.518252 master-0 kubenswrapper[26474]: E0223 13:14:45.518223 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles podName:65ecd69f-3f1b-41d7-ba1f-225acaa735d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.01821381 +0000 UTC m=+7.864721617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles") pod "metrics-server-69f7f878d4-746vx" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.518252 master-0 kubenswrapper[26474]: E0223 13:14:45.518225 26474 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.518252 master-0 kubenswrapper[26474]: E0223 13:14:45.518223 26474 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.518385 master-0 kubenswrapper[26474]: E0223 13:14:45.518268 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle podName:65ecd69f-3f1b-41d7-ba1f-225acaa735d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.018261052 +0000 UTC m=+7.864768739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle") pod "metrics-server-69f7f878d4-746vx" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:14:45.518385 master-0 kubenswrapper[26474]: E0223 13:14:45.518290 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config podName:ae8b0e50-59ee-44a9-9a66-8febb833b771 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:46.018278942 +0000 UTC m=+7.864786629 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config") pod "node-exporter-tv6s2" (UID: "ae8b0e50-59ee-44a9-9a66-8febb833b771") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:14:45.533172 master-0 kubenswrapper[26474]: I0223 13:14:45.533129 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 23 13:14:45.553792 master-0 kubenswrapper[26474]: I0223 13:14:45.553729 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-2hz68" Feb 23 13:14:45.573109 master-0 kubenswrapper[26474]: I0223 13:14:45.573042 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 13:14:45.592631 master-0 kubenswrapper[26474]: I0223 13:14:45.592575 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 13:14:45.613002 master-0 kubenswrapper[26474]: I0223 13:14:45.612924 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 13:14:45.633139 master-0 kubenswrapper[26474]: I0223 13:14:45.633050 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 13:14:45.652265 master-0 kubenswrapper[26474]: I0223 13:14:45.652190 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 13:14:45.672640 master-0 kubenswrapper[26474]: I0223 13:14:45.672577 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 13:14:45.692236 master-0 kubenswrapper[26474]: I0223 13:14:45.692177 26474 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:14:45.711793 master-0 kubenswrapper[26474]: I0223 13:14:45.711747 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 13:14:45.732539 master-0 kubenswrapper[26474]: I0223 13:14:45.732462 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 13:14:45.752530 master-0 kubenswrapper[26474]: I0223 13:14:45.752466 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9ppv8" Feb 23 13:14:45.773108 master-0 kubenswrapper[26474]: I0223 13:14:45.773050 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:14:45.793479 master-0 kubenswrapper[26474]: I0223 13:14:45.793411 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:14:45.812649 master-0 kubenswrapper[26474]: I0223 13:14:45.812567 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:14:45.834611 master-0 kubenswrapper[26474]: I0223 13:14:45.834551 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:14:45.853869 master-0 kubenswrapper[26474]: I0223 13:14:45.853779 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 23 13:14:45.873082 master-0 kubenswrapper[26474]: I0223 13:14:45.873008 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 23 13:14:45.893260 master-0 
kubenswrapper[26474]: I0223 13:14:45.893195 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 23 13:14:45.912548 master-0 kubenswrapper[26474]: I0223 13:14:45.912489 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 23 13:14:45.933390 master-0 kubenswrapper[26474]: I0223 13:14:45.933314 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 13:14:45.953229 master-0 kubenswrapper[26474]: I0223 13:14:45.953164 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:14:45.972099 master-0 kubenswrapper[26474]: I0223 13:14:45.972055 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:14:45.993900 master-0 kubenswrapper[26474]: I0223 13:14:45.993833 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xfzk8" Feb 23 13:14:46.012672 master-0 kubenswrapper[26474]: I0223 13:14:46.012599 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 13:14:46.032636 master-0 kubenswrapper[26474]: I0223 13:14:46.032549 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-46ht7" Feb 23 13:14:46.039748 master-0 kubenswrapper[26474]: I0223 13:14:46.039703 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/1.log" Feb 23 13:14:46.041545 master-0 kubenswrapper[26474]: I0223 13:14:46.041491 26474 generic.go:334] 
"Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="f29c0801cb73a88db37a5dde38238b8a02b3aa465a16ef32b1a402a776062703" exitCode=255 Feb 23 13:14:46.041644 master-0 kubenswrapper[26474]: I0223 13:14:46.041618 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:14:46.053146 master-0 kubenswrapper[26474]: I0223 13:14:46.053096 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-v42tl" Feb 23 13:14:46.073015 master-0 kubenswrapper[26474]: I0223 13:14:46.072969 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 23 13:14:46.086965 master-0 kubenswrapper[26474]: I0223 13:14:46.086916 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:14:46.087095 master-0 kubenswrapper[26474]: I0223 13:14:46.086996 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:46.087305 master-0 kubenswrapper[26474]: I0223 13:14:46.087247 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images\") pod 
\"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:14:46.087598 master-0 kubenswrapper[26474]: I0223 13:14:46.087570 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:46.087639 master-0 kubenswrapper[26474]: I0223 13:14:46.087611 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:46.087670 master-0 kubenswrapper[26474]: I0223 13:14:46.087640 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:14:46.087670 master-0 kubenswrapper[26474]: I0223 13:14:46.087663 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:46.087725 master-0 
kubenswrapper[26474]: I0223 13:14:46.087688 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:46.087725 master-0 kubenswrapper[26474]: I0223 13:14:46.087713 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:14:46.087854 master-0 kubenswrapper[26474]: I0223 13:14:46.087821 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b459832-b875-49a6-a7c3-253fa6c8e45a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:14:46.087979 master-0 kubenswrapper[26474]: I0223 13:14:46.087950 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:46.088055 master-0 kubenswrapper[26474]: I0223 13:14:46.088029 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:14:46.088092 master-0 kubenswrapper[26474]: I0223 13:14:46.088059 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:46.088131 master-0 kubenswrapper[26474]: I0223 13:14:46.088100 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-srv-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:14:46.088178 master-0 kubenswrapper[26474]: I0223 13:14:46.088150 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:46.088307 master-0 kubenswrapper[26474]: I0223 13:14:46.088278 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 
13:14:46.088582 master-0 kubenswrapper[26474]: I0223 13:14:46.088541 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24d878bd-05cd-414e-94c1-a3e9ce637331-service-ca\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:46.088644 master-0 kubenswrapper[26474]: I0223 13:14:46.088618 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:14:46.088684 master-0 kubenswrapper[26474]: I0223 13:14:46.088638 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6" Feb 23 13:14:46.088772 master-0 kubenswrapper[26474]: I0223 13:14:46.088747 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:46.088808 master-0 kubenswrapper[26474]: I0223 13:14:46.088785 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls\") pod \"dns-default-ljphn\" 
(UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn" Feb 23 13:14:46.088858 master-0 kubenswrapper[26474]: I0223 13:14:46.088842 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:46.088887 master-0 kubenswrapper[26474]: I0223 13:14:46.088830 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d878bd-05cd-414e-94c1-a3e9ce637331-serving-cert\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:46.089126 master-0 kubenswrapper[26474]: I0223 13:14:46.088872 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv" Feb 23 13:14:46.089217 master-0 kubenswrapper[26474]: I0223 13:14:46.089185 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:46.089217 master-0 kubenswrapper[26474]: I0223 13:14:46.089183 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7cadeb05-9298-4bcf-b6f2-659c68eba020-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:14:46.089280 master-0 kubenswrapper[26474]: I0223 13:14:46.089213 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2acc6d35-5679-4fac-970f-3d2ff954cc33-metrics-tls\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn" Feb 23 13:14:46.089280 master-0 kubenswrapper[26474]: I0223 13:14:46.089267 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:46.089377 master-0 kubenswrapper[26474]: I0223 13:14:46.089352 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:46.089430 master-0 kubenswrapper[26474]: I0223 13:14:46.089385 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: 
\"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:46.089430 master-0 kubenswrapper[26474]: I0223 13:14:46.089416 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:46.089503 master-0 kubenswrapper[26474]: I0223 13:14:46.089439 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:46.089556 master-0 kubenswrapper[26474]: I0223 13:14:46.089529 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:46.089592 master-0 kubenswrapper[26474]: I0223 13:14:46.089565 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:46.089629 master-0 kubenswrapper[26474]: I0223 13:14:46.089599 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:46.089730 master-0 kubenswrapper[26474]: I0223 13:14:46.089703 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:46.089764 master-0 kubenswrapper[26474]: I0223 13:14:46.089732 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:46.089796 master-0 kubenswrapper[26474]: I0223 13:14:46.089762 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" Feb 23 13:14:46.089796 master-0 kubenswrapper[26474]: I0223 13:14:46.089789 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " 
pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:46.089858 master-0 kubenswrapper[26474]: I0223 13:14:46.089807 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:14:46.089858 master-0 kubenswrapper[26474]: I0223 13:14:46.089848 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:46.089934 master-0 kubenswrapper[26474]: I0223 13:14:46.089899 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:46.090014 master-0 kubenswrapper[26474]: I0223 13:14:46.089985 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-ld22t\" (UID: \"54001c8e-cb57-47dc-8594-9daed4190bda\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" Feb 23 13:14:46.090059 master-0 kubenswrapper[26474]: I0223 13:14:46.090040 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:46.090102 master-0 kubenswrapper[26474]: I0223 13:14:46.090083 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:46.090132 master-0 kubenswrapper[26474]: I0223 13:14:46.090101 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:14:46.090161 master-0 kubenswrapper[26474]: I0223 13:14:46.090133 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:46.090192 master-0 kubenswrapper[26474]: I0223 13:14:46.090171 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 
13:14:46.090224 master-0 kubenswrapper[26474]: I0223 13:14:46.090196 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn" Feb 23 13:14:46.090252 master-0 kubenswrapper[26474]: I0223 13:14:46.090226 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:46.090252 master-0 kubenswrapper[26474]: I0223 13:14:46.090214 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/54001c8e-cb57-47dc-8594-9daed4190bda-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-ld22t\" (UID: \"54001c8e-cb57-47dc-8594-9daed4190bda\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t" Feb 23 13:14:46.090309 master-0 kubenswrapper[26474]: I0223 13:14:46.090297 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-config\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:46.090398 master-0 kubenswrapper[26474]: I0223 13:14:46.090376 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ede583b-44b0-42af-92c9-f7b8938f7843-images\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: 
\"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:46.090439 master-0 kubenswrapper[26474]: I0223 13:14:46.090392 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:46.090467 master-0 kubenswrapper[26474]: I0223 13:14:46.090449 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:46.090498 master-0 kubenswrapper[26474]: I0223 13:14:46.090464 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce55de54-8441-4a16-8b57-598042869000-serving-cert\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng" Feb 23 13:14:46.090498 master-0 kubenswrapper[26474]: I0223 13:14:46.090486 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:46.090551 master-0 kubenswrapper[26474]: I0223 13:14:46.090482 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2acc6d35-5679-4fac-970f-3d2ff954cc33-config-volume\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn" Feb 23 13:14:46.090611 master-0 kubenswrapper[26474]: I0223 13:14:46.090570 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:14:46.090661 master-0 kubenswrapper[26474]: I0223 13:14:46.090639 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th" Feb 23 13:14:46.090725 master-0 kubenswrapper[26474]: I0223 13:14:46.090699 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:14:46.090755 master-0 kubenswrapper[26474]: I0223 13:14:46.090738 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:14:46.090787 
master-0 kubenswrapper[26474]: I0223 13:14:46.090765 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:14:46.090787 master-0 kubenswrapper[26474]: I0223 13:14:46.090739 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-webhook-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:14:46.090865 master-0 kubenswrapper[26474]: I0223 13:14:46.090840 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:46.090922 master-0 kubenswrapper[26474]: I0223 13:14:46.090901 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/762249c6-b548-4733-8b78-64f73430bfbd-apiservice-cert\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz" Feb 23 13:14:46.090992 master-0 kubenswrapper[26474]: I0223 13:14:46.090970 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" 
(UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6" Feb 23 13:14:46.091027 master-0 kubenswrapper[26474]: I0223 13:14:46.091007 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:46.091057 master-0 kubenswrapper[26474]: I0223 13:14:46.091038 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn" Feb 23 13:14:46.091087 master-0 kubenswrapper[26474]: I0223 13:14:46.091067 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:46.091292 master-0 kubenswrapper[26474]: I0223 13:14:46.091260 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:46.091417 master-0 kubenswrapper[26474]: I0223 13:14:46.091304 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:46.091417 master-0 kubenswrapper[26474]: I0223 13:14:46.091332 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm" Feb 23 13:14:46.091471 master-0 kubenswrapper[26474]: I0223 13:14:46.091451 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:46.091557 master-0 kubenswrapper[26474]: I0223 13:14:46.091535 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:46.091591 master-0 kubenswrapper[26474]: I0223 13:14:46.091536 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5ede583b-44b0-42af-92c9-f7b8938f7843-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " 
pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg"
Feb 23 13:14:46.091652 master-0 kubenswrapper[26474]: I0223 13:14:46.091628 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:46.091698 master-0 kubenswrapper[26474]: I0223 13:14:46.091677 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"
Feb 23 13:14:46.091973 master-0 kubenswrapper[26474]: I0223 13:14:46.091943 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/898e6c96-73d5-4dc5-a383-986599a5bcd9-srv-cert\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"
Feb 23 13:14:46.092325 master-0 kubenswrapper[26474]: I0223 13:14:46.092273 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2ttqf"
Feb 23 13:14:46.112729 master-0 kubenswrapper[26474]: I0223 13:14:46.112540 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 23 13:14:46.133491 master-0 kubenswrapper[26474]: I0223 13:14:46.133427 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 23 13:14:46.141652 master-0 kubenswrapper[26474]: I0223 13:14:46.141574 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-serving-cert\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6"
Feb 23 13:14:46.152297 master-0 kubenswrapper[26474]: I0223 13:14:46.152250 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-dndpz"
Feb 23 13:14:46.173146 master-0 kubenswrapper[26474]: I0223 13:14:46.173055 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-sxgbj"
Feb 23 13:14:46.192842 master-0 kubenswrapper[26474]: I0223 13:14:46.192777 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 23 13:14:46.202006 master-0 kubenswrapper[26474]: I0223 13:14:46.201965 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:14:46.212439 master-0 kubenswrapper[26474]: I0223 13:14:46.212227 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 23 13:14:46.222901 master-0 kubenswrapper[26474]: I0223 13:14:46.222846 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-default-certificate\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:46.234132 master-0 kubenswrapper[26474]: I0223 13:14:46.234078 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 23 13:14:46.240588 master-0 kubenswrapper[26474]: I0223 13:14:46.240497 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:14:46.252560 master-0 kubenswrapper[26474]: I0223 13:14:46.252495 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 23 13:14:46.261228 master-0 kubenswrapper[26474]: I0223 13:14:46.261178 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-certs\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg"
Feb 23 13:14:46.274157 master-0 kubenswrapper[26474]: I0223 13:14:46.274091 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-q7zn4"
Feb 23 13:14:46.293068 master-0 kubenswrapper[26474]: I0223 13:14:46.292912 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 23 13:14:46.299706 master-0 kubenswrapper[26474]: I0223 13:14:46.299649 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv"
Feb 23 13:14:46.314433 master-0 kubenswrapper[26474]: I0223 13:14:46.314360 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 23 13:14:46.318017 master-0 kubenswrapper[26474]: I0223 13:14:46.317980 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdad149d-da6f-49ac-85e5-deb01f161166-node-bootstrap-token\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg"
Feb 23 13:14:46.335441 master-0 kubenswrapper[26474]: I0223 13:14:46.335391 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 23 13:14:46.342975 master-0 kubenswrapper[26474]: I0223 13:14:46.342888 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-metrics-certs\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:46.352460 master-0 kubenswrapper[26474]: I0223 13:14:46.352372 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 23 13:14:46.360212 master-0 kubenswrapper[26474]: I0223 13:14:46.360161 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73ba4f16-0217-4bf1-8fc2-6b385eda0771-stats-auth\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:46.373090 master-0 kubenswrapper[26474]: I0223 13:14:46.373050 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-f8dtp"
Feb 23 13:14:46.394504 master-0 kubenswrapper[26474]: I0223 13:14:46.394236 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 23 13:14:46.412798 master-0 kubenswrapper[26474]: I0223 13:14:46.412645 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 23 13:14:46.419936 master-0 kubenswrapper[26474]: I0223 13:14:46.419882 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-proxy-tls\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th"
Feb 23 13:14:46.443494 master-0 kubenswrapper[26474]: I0223 13:14:46.439601 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 23 13:14:46.444326 master-0 kubenswrapper[26474]: I0223 13:14:46.444052 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:14:46.451097 master-0 kubenswrapper[26474]: I0223 13:14:46.450912 26474 request.go:700] Waited for 2.011839904s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dmetrics-server-639sbo1a4as7e&limit=500&resourceVersion=0
Feb 23 13:14:46.457813 master-0 kubenswrapper[26474]: I0223 13:14:46.457742 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-639sbo1a4as7e"
Feb 23 13:14:46.461671 master-0 kubenswrapper[26474]: I0223 13:14:46.461612 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:14:46.475925 master-0 kubenswrapper[26474]: I0223 13:14:46.475680 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 23 13:14:46.478408 master-0 kubenswrapper[26474]: I0223 13:14:46.478327 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ba4f16-0217-4bf1-8fc2-6b385eda0771-service-ca-bundle\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:46.492807 master-0 kubenswrapper[26474]: I0223 13:14:46.492751 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 23 13:14:46.501005 master-0 kubenswrapper[26474]: I0223 13:14:46.500941 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:14:46.512702 master-0 kubenswrapper[26474]: I0223 13:14:46.512647 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-zjvhw"
Feb 23 13:14:46.533491 master-0 kubenswrapper[26474]: I0223 13:14:46.533316 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 23 13:14:46.552664 master-0 kubenswrapper[26474]: I0223 13:14:46.552547 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 23 13:14:46.573488 master-0 kubenswrapper[26474]: I0223 13:14:46.573305 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6vfhg"
Feb 23 13:14:46.593654 master-0 kubenswrapper[26474]: I0223 13:14:46.593581 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 23 13:14:46.601737 master-0 kubenswrapper[26474]: I0223 13:14:46.601684 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-images\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th"
Feb 23 13:14:46.621555 master-0 kubenswrapper[26474]: I0223 13:14:46.621512 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 23 13:14:46.629688 master-0 kubenswrapper[26474]: I0223 13:14:46.629605 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng"
Feb 23 13:14:46.633140 master-0 kubenswrapper[26474]: I0223 13:14:46.633086 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 23 13:14:46.652812 master-0 kubenswrapper[26474]: I0223 13:14:46.652761 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 23 13:14:46.661080 master-0 kubenswrapper[26474]: I0223 13:14:46.661047 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce55de54-8441-4a16-8b57-598042869000-service-ca-bundle\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng"
Feb 23 13:14:46.673204 master-0 kubenswrapper[26474]: I0223 13:14:46.673170 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 23 13:14:46.682581 master-0 kubenswrapper[26474]: I0223 13:14:46.682541 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-tls\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:14:46.692894 master-0 kubenswrapper[26474]: I0223 13:14:46.692819 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 23 13:14:46.701155 master-0 kubenswrapper[26474]: I0223 13:14:46.701080 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8b0e50-59ee-44a9-9a66-8febb833b771-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2"
Feb 23 13:14:46.712194 master-0 kubenswrapper[26474]: I0223 13:14:46.712146 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 23 13:14:46.720072 master-0 kubenswrapper[26474]: I0223 13:14:46.719895 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv"
Feb 23 13:14:46.732909 master-0 kubenswrapper[26474]: I0223 13:14:46.732840 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-zwkpp"
Feb 23 13:14:46.753240 master-0 kubenswrapper[26474]: I0223 13:14:46.753175 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 23 13:14:46.765525 master-0 kubenswrapper[26474]: I0223 13:14:46.764745 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5802841-52dc-4d15-a252-0eac70e9fbbc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm"
Feb 23 13:14:46.772526 master-0 kubenswrapper[26474]: I0223 13:14:46.772453 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 23 13:14:46.793594 master-0 kubenswrapper[26474]: I0223 13:14:46.793490 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 23 13:14:46.812812 master-0 kubenswrapper[26474]: I0223 13:14:46.812599 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 23 13:14:46.820481 master-0 kubenswrapper[26474]: I0223 13:14:46.820335 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:14:46.832537 master-0 kubenswrapper[26474]: I0223 13:14:46.832450 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 23 13:14:46.840238 master-0 kubenswrapper[26474]: I0223 13:14:46.840174 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:14:46.853016 master-0 kubenswrapper[26474]: I0223 13:14:46.852662 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 23 13:14:46.862596 master-0 kubenswrapper[26474]: I0223 13:14:46.862512 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf57b864-25d7-4420-9052-04dd580a9f7d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn"
Feb 23 13:14:46.873175 master-0 kubenswrapper[26474]: I0223 13:14:46.873067 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-8d884"
Feb 23 13:14:46.893760 master-0 kubenswrapper[26474]: I0223 13:14:46.893650 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 23 13:14:46.900760 master-0 kubenswrapper[26474]: I0223 13:14:46.900660 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-client\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:46.913995 master-0 kubenswrapper[26474]: I0223 13:14:46.913926 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 23 13:14:46.920121 master-0 kubenswrapper[26474]: I0223 13:14:46.920046 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-serving-cert\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:46.932604 master-0 kubenswrapper[26474]: I0223 13:14:46.932547 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 23 13:14:46.942310 master-0 kubenswrapper[26474]: I0223 13:14:46.942227 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf57b864-25d7-4420-9052-04dd580a9f7d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn"
Feb 23 13:14:46.952317 master-0 kubenswrapper[26474]: I0223 13:14:46.952246 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 13:14:46.961436 master-0 kubenswrapper[26474]: I0223 13:14:46.961369 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-etcd-serving-ca\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:46.972485 master-0 kubenswrapper[26474]: I0223 13:14:46.972411 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 13:14:46.981269 master-0 kubenswrapper[26474]: I0223 13:14:46.981187 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-trusted-ca-bundle\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:46.992545 master-0 kubenswrapper[26474]: I0223 13:14:46.992467 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 23 13:14:47.003143 master-0 kubenswrapper[26474]: I0223 13:14:47.003048 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77ea2b54-bcc2-4c4e-9415-03984721b5b1-audit-policies\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:47.013115 master-0 kubenswrapper[26474]: I0223 13:14:47.013005 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 23 13:14:47.031813 master-0 kubenswrapper[26474]: I0223 13:14:47.031729 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-nhgb8"
Feb 23 13:14:47.054145 master-0 kubenswrapper[26474]: I0223 13:14:47.054072 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 23 13:14:47.072628 master-0 kubenswrapper[26474]: I0223 13:14:47.072453 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 23 13:14:47.082695 master-0 kubenswrapper[26474]: I0223 13:14:47.082628 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz"
Feb 23 13:14:47.088995 master-0 kubenswrapper[26474]: E0223 13:14:47.088760 26474 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.088995 master-0 kubenswrapper[26474]: E0223 13:14:47.088898 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls podName:3ccbaed9-ab28-47c0-a585-648b9251fd11 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:48.088876199 +0000 UTC m=+9.935383876 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-p6hj2" (UID: "3ccbaed9-ab28-47c0-a585-648b9251fd11") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.089699 master-0 kubenswrapper[26474]: E0223 13:14:47.089554 26474 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.089699 master-0 kubenswrapper[26474]: E0223 13:14:47.089658 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config podName:3ccbaed9-ab28-47c0-a585-648b9251fd11 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:48.089628787 +0000 UTC m=+9.936136674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-6dbff8cb4c-p6hj2" (UID: "3ccbaed9-ab28-47c0-a585-648b9251fd11") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.089950 master-0 kubenswrapper[26474]: E0223 13:14:47.089730 26474 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:14:47.089950 master-0 kubenswrapper[26474]: E0223 13:14:47.089762 26474 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.090065 master-0 kubenswrapper[26474]: E0223 13:14:47.090036 26474 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:14:47.090065 master-0 kubenswrapper[26474]: E0223 13:14:47.090050 26474 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.090154 master-0 kubenswrapper[26474]: E0223 13:14:47.089778 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config podName:0d134032-1c35-4b69-9336-bcdc9c1cb87d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:48.08976659 +0000 UTC m=+9.936274517 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config") pod "machine-approver-7dd9c7d7b9-z7jgz" (UID: "0d134032-1c35-4b69-9336-bcdc9c1cb87d") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:14:47.090232 master-0 kubenswrapper[26474]: E0223 13:14:47.090200 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls podName:0d134032-1c35-4b69-9336-bcdc9c1cb87d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:48.090153539 +0000 UTC m=+9.936661256 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-z7jgz" (UID: "0d134032-1c35-4b69-9336-bcdc9c1cb87d") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.090298 master-0 kubenswrapper[26474]: E0223 13:14:47.090250 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca podName:945907dd-f6b3-400f-b539-e1310eb11dd7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:48.090233752 +0000 UTC m=+9.936741469 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca") pod "cloud-credential-operator-6968c58f46-87hx7" (UID: "945907dd-f6b3-400f-b539-e1310eb11dd7") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:14:47.090298 master-0 kubenswrapper[26474]: E0223 13:14:47.090283 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls podName:57803492-e1dd-4994-8330-1e9b393d54fd nodeName:}" failed. No retries permitted until 2026-02-23 13:14:48.090269113 +0000 UTC m=+9.936776830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls") pod "machine-config-daemon-q8bjq" (UID: "57803492-e1dd-4994-8330-1e9b393d54fd") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.091661 master-0 kubenswrapper[26474]: E0223 13:14:47.091595 26474 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.091838 master-0 kubenswrapper[26474]: E0223 13:14:47.091798 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config podName:77ea2b54-bcc2-4c4e-9415-03984721b5b1 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:48.091700638 +0000 UTC m=+9.938208325 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config") pod "apiserver-6c65bdd8f8-vblb2" (UID: "77ea2b54-bcc2-4c4e-9415-03984721b5b1") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.091838 master-0 kubenswrapper[26474]: E0223 13:14:47.091799 26474 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.091954 master-0 kubenswrapper[26474]: E0223 13:14:47.091900 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert podName:945907dd-f6b3-400f-b539-e1310eb11dd7 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:48.091885882 +0000 UTC m=+9.938393569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-87hx7" (UID: "945907dd-f6b3-400f-b539-e1310eb11dd7") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:14:47.092796 master-0 kubenswrapper[26474]: I0223 13:14:47.092755 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 23 13:14:47.111653 master-0 kubenswrapper[26474]: I0223 13:14:47.111555 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 23 13:14:47.132854 master-0 kubenswrapper[26474]: I0223 13:14:47.132777 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nhhd2"
Feb 23 13:14:47.152671 master-0 kubenswrapper[26474]: I0223 13:14:47.152578 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-qqvc9"
Feb 23 13:14:47.173107 master-0 kubenswrapper[26474]: I0223 13:14:47.173031 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 23 13:14:47.192530 master-0 kubenswrapper[26474]: I0223 13:14:47.192461 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 23 13:14:47.212925 master-0 kubenswrapper[26474]: I0223 13:14:47.212844 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-vqpkz"
Feb 23 13:14:47.232315 master-0 kubenswrapper[26474]: I0223 13:14:47.232234 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 23 13:14:47.251740 master-0 kubenswrapper[26474]: I0223 13:14:47.251686 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 23 13:14:47.272289 master-0 kubenswrapper[26474]: I0223 13:14:47.272227 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 23 13:14:47.295474 master-0 kubenswrapper[26474]: I0223 13:14:47.293868 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-wgs7j"
Feb 23 13:14:47.313617 master-0 kubenswrapper[26474]: I0223 13:14:47.313536 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-gm6kx"
Feb 23 13:14:47.341895 master-0 kubenswrapper[26474]: I0223 13:14:47.341761 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 23 13:14:47.352968 master-0 kubenswrapper[26474]: I0223 13:14:47.352865 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 23 13:14:47.373294 master-0 kubenswrapper[26474]: I0223 13:14:47.373181 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 23 13:14:47.406404 master-0 kubenswrapper[26474]: E0223 13:14:47.406301 26474 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.003s"
Feb 23 13:14:47.412083 master-0 kubenswrapper[26474]: I0223 13:14:47.412022 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 23 13:14:47.422893 master-0 kubenswrapper[26474]: I0223 13:14:47.420664 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 23 13:14:47.433275 master-0 kubenswrapper[26474]: I0223 13:14:47.433178 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 23 13:14:47.451376 master-0 kubenswrapper[26474]: I0223 13:14:47.451296 26474 request.go:700] Waited for 2.974314883s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/nodes
Feb 23 13:14:47.496186 master-0 kubenswrapper[26474]: I0223 13:14:47.496113 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b825\" (UniqueName: \"kubernetes.io/projected/f2c50f9a-8c73-4cb9-9cbf-2565496212a6-kube-api-access-4b825\") pod \"service-ca-576b4d78bd-9pltw\" (UID: \"f2c50f9a-8c73-4cb9-9cbf-2565496212a6\") " pod="openshift-service-ca/service-ca-576b4d78bd-9pltw"
Feb 23 13:14:47.505178 master-0 kubenswrapper[26474]: I0223 13:14:47.505119 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8wvx\" (UniqueName: \"kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx\") pod \"metrics-server-69f7f878d4-746vx\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") " pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:14:47.528336 master-0 kubenswrapper[26474]: I0223 13:14:47.528264 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znjcw\" (UniqueName: \"kubernetes.io/projected/898e6c96-73d5-4dc5-a383-986599a5bcd9-kube-api-access-znjcw\") pod \"catalog-operator-596f79dd6f-jbhn6\" (UID: \"898e6c96-73d5-4dc5-a383-986599a5bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"
Feb 23 13:14:47.568323 master-0 kubenswrapper[26474]: I0223 13:14:47.568224 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-bound-sa-token\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69"
Feb 23 13:14:47.569631 master-0 kubenswrapper[26474]: I0223 13:14:47.569583 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbhv\" (UniqueName: \"kubernetes.io/projected/90a694bb-fe3e-4478-bbb4-d2be9cd4c57f-kube-api-access-mhbhv\") pod \"openshift-config-operator-6f47d587d6-8wrb6\" (UID: \"90a694bb-fe3e-4478-bbb4-d2be9cd4c57f\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6"
Feb 23 13:14:47.583109 master-0 kubenswrapper[26474]: I0223 13:14:47.583066 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzdfs\" (UniqueName: \"kubernetes.io/projected/d48d286d-4f37-4027-86cd-1580e6076613-kube-api-access-fzdfs\") pod \"multus-6lk7x\" (UID: \"d48d286d-4f37-4027-86cd-1580e6076613\") " pod="openshift-multus/multus-6lk7x"
Feb 23 13:14:47.607471 master-0 kubenswrapper[26474]: I0223 13:14:47.607316 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wplcg\" (UniqueName: \"kubernetes.io/projected/7d0a976c-1492-4989-a5ff-e386564dd6ba-kube-api-access-wplcg\") pod \"openshift-apiserver-operator-8586dccc9b-zh69g\" (UID: \"7d0a976c-1492-4989-a5ff-e386564dd6ba\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-zh69g"
Feb 23 13:14:47.627816 master-0 kubenswrapper[26474]: I0223 13:14:47.627721 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cksnd\" (UniqueName: \"kubernetes.io/projected/77ea2b54-bcc2-4c4e-9415-03984721b5b1-kube-api-access-cksnd\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") "
pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:47.657778 master-0 kubenswrapper[26474]: I0223 13:14:47.657697 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4mkf\" (UniqueName: \"kubernetes.io/projected/540b41b0-f574-46b9-8b2f-19e90ad5d0ce-kube-api-access-f4mkf\") pod \"ovnkube-node-qz8dt\" (UID: \"540b41b0-f574-46b9-8b2f-19e90ad5d0ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt" Feb 23 13:14:47.671059 master-0 kubenswrapper[26474]: I0223 13:14:47.670960 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7sfw\" (UniqueName: \"kubernetes.io/projected/d7c80f4d-6b28-44f4-beef-01e705260452-kube-api-access-d7sfw\") pod \"ovnkube-control-plane-5d8dfcdc87-7hpbz\" (UID: \"d7c80f4d-6b28-44f4-beef-01e705260452\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7hpbz" Feb 23 13:14:47.691042 master-0 kubenswrapper[26474]: I0223 13:14:47.690941 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7b4r\" (UniqueName: \"kubernetes.io/projected/5ede583b-44b0-42af-92c9-f7b8938f7843-kube-api-access-p7b4r\") pod \"cluster-baremetal-operator-d6bb9bb76-vfkqg\" (UID: \"5ede583b-44b0-42af-92c9-f7b8938f7843\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-vfkqg" Feb 23 13:14:47.708573 master-0 kubenswrapper[26474]: I0223 13:14:47.708453 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgnr\" (UniqueName: \"kubernetes.io/projected/bdad149d-da6f-49ac-85e5-deb01f161166-kube-api-access-llgnr\") pod \"machine-config-server-97rhg\" (UID: \"bdad149d-da6f-49ac-85e5-deb01f161166\") " pod="openshift-machine-config-operator/machine-config-server-97rhg" Feb 23 13:14:47.730029 master-0 kubenswrapper[26474]: I0223 13:14:47.729915 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6q5\" (UniqueName: 
\"kubernetes.io/projected/0d7c1ea0-e3c1-4494-bb27-058200b93ed7-kube-api-access-8j6q5\") pod \"network-operator-7d7db75979-q7q5x\" (UID: \"0d7c1ea0-e3c1-4494-bb27-058200b93ed7\") " pod="openshift-network-operator/network-operator-7d7db75979-q7q5x" Feb 23 13:14:47.765472 master-0 kubenswrapper[26474]: I0223 13:14:47.760738 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mhb\" (UniqueName: \"kubernetes.io/projected/35e97ed9-695d-483e-8878-4f231c79f1d2-kube-api-access-p2mhb\") pod \"marketplace-operator-6f5488b997-588zk\" (UID: \"35e97ed9-695d-483e-8878-4f231c79f1d2\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk" Feb 23 13:14:47.776431 master-0 kubenswrapper[26474]: I0223 13:14:47.776276 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9lvg\" (UniqueName: \"kubernetes.io/projected/b8bdbf92-61e3-41e9-a48d-4259cee80e9f-kube-api-access-t9lvg\") pod \"iptables-alerter-qg27h\" (UID: \"b8bdbf92-61e3-41e9-a48d-4259cee80e9f\") " pod="openshift-network-operator/iptables-alerter-qg27h" Feb 23 13:14:47.797876 master-0 kubenswrapper[26474]: I0223 13:14:47.797764 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq2rn\" (UniqueName: \"kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn\") pod \"controller-manager-69f44bb786-4zj6n\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:14:47.809758 master-0 kubenswrapper[26474]: I0223 13:14:47.809671 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnmqj\" (UniqueName: \"kubernetes.io/projected/e941c759-ab95-4b30-a571-6c132ab0e639-kube-api-access-nnmqj\") pod \"network-metrics-daemon-bbrcr\" (UID: \"e941c759-ab95-4b30-a571-6c132ab0e639\") " pod="openshift-multus/network-metrics-daemon-bbrcr" Feb 23 13:14:47.840667 master-0 
kubenswrapper[26474]: I0223 13:14:47.840553 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3daf0176-92e7-4642-8643-4afbefb77235-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-cz4nt\" (UID: \"3daf0176-92e7-4642-8643-4afbefb77235\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-cz4nt" Feb 23 13:14:47.849229 master-0 kubenswrapper[26474]: I0223 13:14:47.849135 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cjfj\" (UniqueName: \"kubernetes.io/projected/06ccd378-23ee-49b7-a435-4b01de772155-kube-api-access-7cjfj\") pod \"machine-config-controller-54cb48566c-kxlfr\" (UID: \"06ccd378-23ee-49b7-a435-4b01de772155\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-kxlfr" Feb 23 13:14:47.873061 master-0 kubenswrapper[26474]: I0223 13:14:47.872893 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6s7\" (UniqueName: \"kubernetes.io/projected/18386753-ec74-456d-838d-98c07c169b4b-kube-api-access-9d6s7\") pod \"network-node-identity-zr6kv\" (UID: \"18386753-ec74-456d-838d-98c07c169b4b\") " pod="openshift-network-node-identity/network-node-identity-zr6kv" Feb 23 13:14:47.886208 master-0 kubenswrapper[26474]: I0223 13:14:47.886131 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbk8g\" (UniqueName: \"kubernetes.io/projected/945907dd-f6b3-400f-b539-e1310eb11dd7-kube-api-access-wbk8g\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:47.904643 master-0 kubenswrapper[26474]: I0223 13:14:47.904566 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6d4r\" (UniqueName: 
\"kubernetes.io/projected/8422281d-af45-4f17-8f15-ac3fd9da4bbc-kube-api-access-d6d4r\") pod \"tuned-mjpd9\" (UID: \"8422281d-af45-4f17-8f15-ac3fd9da4bbc\") " pod="openshift-cluster-node-tuning-operator/tuned-mjpd9" Feb 23 13:14:47.929840 master-0 kubenswrapper[26474]: I0223 13:14:47.929763 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjkkc\" (UniqueName: \"kubernetes.io/projected/0d134032-1c35-4b69-9336-bcdc9c1cb87d-kube-api-access-wjkkc\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:47.950979 master-0 kubenswrapper[26474]: I0223 13:14:47.950890 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqzj\" (UniqueName: \"kubernetes.io/projected/ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9-kube-api-access-zcqzj\") pod \"service-ca-operator-c48c8bf7c-mvkrz\" (UID: \"ba96760d-c6aa-4d7d-be5d-9a7e7cb549c9\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mvkrz" Feb 23 13:14:47.970913 master-0 kubenswrapper[26474]: I0223 13:14:47.970831 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpnzd\" (UniqueName: \"kubernetes.io/projected/b12352eb-04d7-4419-b1bf-d08bca9da599-kube-api-access-cpnzd\") pod \"network-check-source-58fb6744f5-b7dr8\" (UID: \"b12352eb-04d7-4419-b1bf-d08bca9da599\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-b7dr8" Feb 23 13:14:47.989161 master-0 kubenswrapper[26474]: I0223 13:14:47.989051 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg9l8\" (UniqueName: \"kubernetes.io/projected/5b459832-b875-49a6-a7c3-253fa6c8e45a-kube-api-access-wg9l8\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf\" (UID: \"5b459832-b875-49a6-a7c3-253fa6c8e45a\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf" Feb 23 13:14:48.009494 master-0 kubenswrapper[26474]: I0223 13:14:48.009416 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fsdx\" (UniqueName: \"kubernetes.io/projected/e6f93af9-bdbb-4319-8ddb-e5458e8a9275-kube-api-access-2fsdx\") pod \"package-server-manager-5c75f78c8b-lqc9w\" (UID: \"e6f93af9-bdbb-4319-8ddb-e5458e8a9275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w" Feb 23 13:14:48.036160 master-0 kubenswrapper[26474]: I0223 13:14:48.036067 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wr82\" (UniqueName: \"kubernetes.io/projected/f348bffa-b2f6-4695-88a7-923625e7fb02-kube-api-access-5wr82\") pod \"authentication-operator-5bd7c86784-rlbcj\" (UID: \"f348bffa-b2f6-4695-88a7-923625e7fb02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-rlbcj" Feb 23 13:14:48.049497 master-0 kubenswrapper[26474]: I0223 13:14:48.049397 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xrl\" (UniqueName: \"kubernetes.io/projected/878aa813-a8b9-4a6f-8086-778df276d0d7-kube-api-access-58xrl\") pod \"ingress-operator-6569778c84-k9h69\" (UID: \"878aa813-a8b9-4a6f-8086-778df276d0d7\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" Feb 23 13:14:48.067145 master-0 kubenswrapper[26474]: I0223 13:14:48.067070 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmcjv\" (UniqueName: \"kubernetes.io/projected/9e0e3072-a35c-4404-891c-f31fafd0b4b1-kube-api-access-rmcjv\") pod \"redhat-marketplace-vwhpv\" (UID: \"9e0e3072-a35c-4404-891c-f31fafd0b4b1\") " pod="openshift-marketplace/redhat-marketplace-vwhpv" Feb 23 13:14:48.092219 master-0 kubenswrapper[26474]: I0223 13:14:48.092143 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5z8xh\" (UniqueName: \"kubernetes.io/projected/affc63b7-db45-429d-82ff-e50f6aae51dc-kube-api-access-5z8xh\") pod \"cluster-storage-operator-f94476f49-gdvlh\" (UID: \"affc63b7-db45-429d-82ff-e50f6aae51dc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-gdvlh" Feb 23 13:14:48.110920 master-0 kubenswrapper[26474]: I0223 13:14:48.110852 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhq2x\" (UniqueName: \"kubernetes.io/projected/47dedc5d-1288-4020-b481-5dca68a7d437-kube-api-access-hhq2x\") pod \"machine-api-operator-5c7cf458b4-nm845\" (UID: \"47dedc5d-1288-4020-b481-5dca68a7d437\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-nm845" Feb 23 13:14:48.133389 master-0 kubenswrapper[26474]: I0223 13:14:48.133219 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt4vh\" (UniqueName: \"kubernetes.io/projected/0d58817c-970f-47b1-a5a5-a491f3e93426-kube-api-access-gt4vh\") pod \"cluster-node-tuning-operator-bcf775fc9-sj5wd\" (UID: \"0d58817c-970f-47b1-a5a5-a491f3e93426\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-sj5wd" Feb 23 13:14:48.143656 master-0 kubenswrapper[26474]: I0223 13:14:48.143534 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:48.143942 master-0 kubenswrapper[26474]: I0223 13:14:48.143859 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:48.144084 master-0 kubenswrapper[26474]: I0223 13:14:48.143879 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77ea2b54-bcc2-4c4e-9415-03984721b5b1-encryption-config\") pod \"apiserver-6c65bdd8f8-vblb2\" (UID: \"77ea2b54-bcc2-4c4e-9415-03984721b5b1\") " pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2" Feb 23 13:14:48.144409 master-0 kubenswrapper[26474]: I0223 13:14:48.144368 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:48.144465 master-0 kubenswrapper[26474]: I0223 13:14:48.144447 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b64s6\" (UniqueName: \"kubernetes.io/projected/6dc83a57-34c5-4c64-97d3-b6191ba690eb-kube-api-access-b64s6\") pod \"node-resolver-rxc8b\" (UID: \"6dc83a57-34c5-4c64-97d3-b6191ba690eb\") " pod="openshift-dns/node-resolver-rxc8b" Feb 23 13:14:48.144670 master-0 kubenswrapper[26474]: I0223 13:14:48.144618 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:48.144670 master-0 kubenswrapper[26474]: I0223 13:14:48.144662 
26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:48.144778 master-0 kubenswrapper[26474]: I0223 13:14:48.144733 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:48.144964 master-0 kubenswrapper[26474]: I0223 13:14:48.144930 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3ccbaed9-ab28-47c0-a585-648b9251fd11-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:48.145131 master-0 kubenswrapper[26474]: I0223 13:14:48.145073 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" Feb 23 13:14:48.145417 master-0 kubenswrapper[26474]: I0223 13:14:48.145384 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca\") pod 
\"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:48.145417 master-0 kubenswrapper[26474]: I0223 13:14:48.145409 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d134032-1c35-4b69-9336-bcdc9c1cb87d-config\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:48.145522 master-0 kubenswrapper[26474]: I0223 13:14:48.145431 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d134032-1c35-4b69-9336-bcdc9c1cb87d-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-z7jgz\" (UID: \"0d134032-1c35-4b69-9336-bcdc9c1cb87d\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-z7jgz" Feb 23 13:14:48.145603 master-0 kubenswrapper[26474]: I0223 13:14:48.145571 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:48.145653 master-0 kubenswrapper[26474]: I0223 13:14:48.145604 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57803492-e1dd-4994-8330-1e9b393d54fd-proxy-tls\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " pod="openshift-machine-config-operator/machine-config-daemon-q8bjq" Feb 23 13:14:48.145939 master-0 kubenswrapper[26474]: 
I0223 13:14:48.145903 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/945907dd-f6b3-400f-b539-e1310eb11dd7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:48.146022 master-0 kubenswrapper[26474]: I0223 13:14:48.145987 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/945907dd-f6b3-400f-b539-e1310eb11dd7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-87hx7\" (UID: \"945907dd-f6b3-400f-b539-e1310eb11dd7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-87hx7" Feb 23 13:14:48.169981 master-0 kubenswrapper[26474]: I0223 13:14:48.169874 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmrjc\" (UniqueName: \"kubernetes.io/projected/f81886b9-fcd3-4666-b550-0688072210f7-kube-api-access-tmrjc\") pod \"network-check-target-rnz52\" (UID: \"f81886b9-fcd3-4666-b550-0688072210f7\") " pod="openshift-network-diagnostics/network-check-target-rnz52" Feb 23 13:14:48.188152 master-0 kubenswrapper[26474]: I0223 13:14:48.188064 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-cj6hr\" (UID: \"4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-cj6hr" Feb 23 13:14:48.206001 master-0 kubenswrapper[26474]: I0223 13:14:48.205910 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8c76\" (UniqueName: 
\"kubernetes.io/projected/ae8b0e50-59ee-44a9-9a66-8febb833b771-kube-api-access-n8c76\") pod \"node-exporter-tv6s2\" (UID: \"ae8b0e50-59ee-44a9-9a66-8febb833b771\") " pod="openshift-monitoring/node-exporter-tv6s2" Feb 23 13:14:48.230257 master-0 kubenswrapper[26474]: I0223 13:14:48.230181 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrjc\" (UniqueName: \"kubernetes.io/projected/29126ab2-a689-4b0e-a1f4-4faed19b0fbc-kube-api-access-nwrjc\") pod \"cluster-olm-operator-5bd7768f54-qgl9z\" (UID: \"29126ab2-a689-4b0e-a1f4-4faed19b0fbc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qgl9z" Feb 23 13:14:48.247664 master-0 kubenswrapper[26474]: I0223 13:14:48.247562 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6b4v\" (UniqueName: \"kubernetes.io/projected/99f14e64-228f-4b9e-991f-ee398fe7bb8a-kube-api-access-p6b4v\") pod \"multus-additional-cni-plugins-srlm4\" (UID: \"99f14e64-228f-4b9e-991f-ee398fe7bb8a\") " pod="openshift-multus/multus-additional-cni-plugins-srlm4" Feb 23 13:14:48.270362 master-0 kubenswrapper[26474]: I0223 13:14:48.270282 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5zks\" (UniqueName: \"kubernetes.io/projected/8a544f5a-06b6-4297-a845-d81e9ab9ece7-kube-api-access-t5zks\") pod \"migrator-5c85bff57-xzh2g\" (UID: \"8a544f5a-06b6-4297-a845-d81e9ab9ece7\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xzh2g" Feb 23 13:14:48.290852 master-0 kubenswrapper[26474]: I0223 13:14:48.290751 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78mm\" (UniqueName: \"kubernetes.io/projected/3ccbaed9-ab28-47c0-a585-648b9251fd11-kube-api-access-q78mm\") pod \"openshift-state-metrics-6dbff8cb4c-p6hj2\" (UID: \"3ccbaed9-ab28-47c0-a585-648b9251fd11\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-p6hj2" Feb 23 13:14:48.305200 master-0 
kubenswrapper[26474]: I0223 13:14:48.305128 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndf8h\" (UniqueName: \"kubernetes.io/projected/d76d5e5a-3009-42c9-b981-e6ddfa3ba13e-kube-api-access-ndf8h\") pod \"multus-admission-controller-5f54bf67d4-s2n8d\" (UID: \"d76d5e5a-3009-42c9-b981-e6ddfa3ba13e\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-s2n8d" Feb 23 13:14:48.330216 master-0 kubenswrapper[26474]: I0223 13:14:48.330146 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxv7\" (UniqueName: \"kubernetes.io/projected/71cb2f21-6d27-411f-9c2f-d5fa286895a7-kube-api-access-wkxv7\") pod \"kube-storage-version-migrator-operator-fc889cfd5-qvb45\" (UID: \"71cb2f21-6d27-411f-9c2f-d5fa286895a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-qvb45" Feb 23 13:14:48.350863 master-0 kubenswrapper[26474]: I0223 13:14:48.350788 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ght2z\" (UniqueName: \"kubernetes.io/projected/7cadeb05-9298-4bcf-b6f2-659c68eba020-kube-api-access-ght2z\") pod \"olm-operator-5499d7f7bb-c7hng\" (UID: \"7cadeb05-9298-4bcf-b6f2-659c68eba020\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng" Feb 23 13:14:48.369021 master-0 kubenswrapper[26474]: I0223 13:14:48.368944 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xp47\" (UniqueName: \"kubernetes.io/projected/e96ce488-0099-43de-9933-425b7c981055-kube-api-access-7xp47\") pod \"redhat-operators-zrtmg\" (UID: \"e96ce488-0099-43de-9933-425b7c981055\") " pod="openshift-marketplace/redhat-operators-zrtmg" Feb 23 13:14:48.398069 master-0 kubenswrapper[26474]: I0223 13:14:48.397901 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22p85\" (UniqueName: 
\"kubernetes.io/projected/9ea16701-bd22-4fc0-90ea-f114b52574f8-kube-api-access-22p85\") pod \"kube-state-metrics-59584d565f-r66qv\" (UID: \"9ea16701-bd22-4fc0-90ea-f114b52574f8\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-r66qv" Feb 23 13:14:48.410488 master-0 kubenswrapper[26474]: I0223 13:14:48.410451 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6xw4\" (UniqueName: \"kubernetes.io/projected/6ff7868e-f0d3-4c63-901f-fed11d623cf1-kube-api-access-r6xw4\") pod \"operator-controller-controller-manager-9cc7d7bb-ql2nl\" (UID: \"6ff7868e-f0d3-4c63-901f-fed11d623cf1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl" Feb 23 13:14:48.429399 master-0 kubenswrapper[26474]: I0223 13:14:48.429324 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24d878bd-05cd-414e-94c1-a3e9ce637331-kube-api-access\") pod \"cluster-version-operator-57476485-8jbxf\" (UID: \"24d878bd-05cd-414e-94c1-a3e9ce637331\") " pod="openshift-cluster-version/cluster-version-operator-57476485-8jbxf" Feb 23 13:14:48.456754 master-0 kubenswrapper[26474]: I0223 13:14:48.456700 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l66s\" (UniqueName: \"kubernetes.io/projected/73ba4f16-0217-4bf1-8fc2-6b385eda0771-kube-api-access-7l66s\") pod \"router-default-7b65dc9fcb-kcfgf\" (UID: \"73ba4f16-0217-4bf1-8fc2-6b385eda0771\") " pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" Feb 23 13:14:48.464369 master-0 kubenswrapper[26474]: I0223 13:14:48.464324 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2gm\" (UniqueName: \"kubernetes.io/projected/57803492-e1dd-4994-8330-1e9b393d54fd-kube-api-access-vg2gm\") pod \"machine-config-daemon-q8bjq\" (UID: \"57803492-e1dd-4994-8330-1e9b393d54fd\") " 
pod="openshift-machine-config-operator/machine-config-daemon-q8bjq"
Feb 23 13:14:48.471153 master-0 kubenswrapper[26474]: I0223 13:14:48.471122 26474 request.go:700] Waited for 3.95913446s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token
Feb 23 13:14:48.485508 master-0 kubenswrapper[26474]: I0223 13:14:48.485469 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jftvv\" (UniqueName: \"kubernetes.io/projected/fce9f67d-0b27-41e3-ba4c-ed9cca25703e-kube-api-access-jftvv\") pod \"catalogd-controller-manager-84b8d9d697-cqmh7\" (UID: \"fce9f67d-0b27-41e3-ba4c-ed9cca25703e\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:14:48.508665 master-0 kubenswrapper[26474]: I0223 13:14:48.508621 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfl9v\" (UniqueName: \"kubernetes.io/projected/8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3-kube-api-access-wfl9v\") pod \"certified-operators-vnmk2\" (UID: \"8ac0e47e-8ae9-44f6-87d5-e3b78d5813a3\") " pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:14:48.537528 master-0 kubenswrapper[26474]: I0223 13:14:48.537434 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdbct\" (UniqueName: \"kubernetes.io/projected/bf57b864-25d7-4420-9052-04dd580a9f7d-kube-api-access-bdbct\") pod \"cluster-autoscaler-operator-86b8dc6d6-xljfn\" (UID: \"bf57b864-25d7-4420-9052-04dd580a9f7d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-xljfn"
Feb 23 13:14:48.562802 master-0 kubenswrapper[26474]: I0223 13:14:48.562695 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvg7b\" (UniqueName: \"kubernetes.io/projected/e5802841-52dc-4d15-a252-0eac70e9fbbc-kube-api-access-nvg7b\") pod \"control-plane-machine-set-operator-686847ff5f-pqjsm\" (UID: \"e5802841-52dc-4d15-a252-0eac70e9fbbc\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-pqjsm"
Feb 23 13:14:48.573560 master-0 kubenswrapper[26474]: I0223 13:14:48.573455 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znzzv\" (UniqueName: \"kubernetes.io/projected/7cf2e1eb-fb95-4401-9112-57aee9ebe1e6-kube-api-access-znzzv\") pod \"cluster-samples-operator-65c5c48b9b-6dzlv\" (UID: \"7cf2e1eb-fb95-4401-9112-57aee9ebe1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-6dzlv"
Feb 23 13:14:48.583815 master-0 kubenswrapper[26474]: I0223 13:14:48.583754 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn9mt\" (UniqueName: \"kubernetes.io/projected/a663ecaf-ced2-4c7d-91c8-44e94851f7d6-kube-api-access-nn9mt\") pod \"machine-config-operator-7f8c75f984-522th\" (UID: \"a663ecaf-ced2-4c7d-91c8-44e94851f7d6\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-522th"
Feb 23 13:14:48.609205 master-0 kubenswrapper[26474]: I0223 13:14:48.609072 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxjf\" (UniqueName: \"kubernetes.io/projected/762249c6-b548-4733-8b78-64f73430bfbd-kube-api-access-mfxjf\") pod \"packageserver-544948f94-sshjz\" (UID: \"762249c6-b548-4733-8b78-64f73430bfbd\") " pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"
Feb 23 13:14:48.631133 master-0 kubenswrapper[26474]: I0223 13:14:48.631034 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j744d\" (UniqueName: \"kubernetes.io/projected/3a5284f9-cbb7-400b-ab39-bfef60ec198b-kube-api-access-j744d\") pod \"community-operators-w7wq9\" (UID: \"3a5284f9-cbb7-400b-ab39-bfef60ec198b\") " pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:14:48.641278 master-0 kubenswrapper[26474]: I0223 13:14:48.641187 26474 scope.go:117] "RemoveContainer" containerID="cf7e22147b726d7bb900d92e5a79955383f2346325db290ec3e45f21c5be3266"
Feb 23 13:14:48.655947 master-0 kubenswrapper[26474]: I0223 13:14:48.655736 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dcr\" (UniqueName: \"kubernetes.io/projected/5793184d-de96-49ad-a060-0fa0cf278a9c-kube-api-access-v9dcr\") pod \"csi-snapshot-controller-6847bb4785-zw4nq\" (UID: \"5793184d-de96-49ad-a060-0fa0cf278a9c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-zw4nq"
Feb 23 13:14:48.667696 master-0 kubenswrapper[26474]: I0223 13:14:48.667587 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6cl\" (UniqueName: \"kubernetes.io/projected/2acc6d35-5679-4fac-970f-3d2ff954cc33-kube-api-access-kc6cl\") pod \"dns-default-ljphn\" (UID: \"2acc6d35-5679-4fac-970f-3d2ff954cc33\") " pod="openshift-dns/dns-default-ljphn"
Feb 23 13:14:48.692924 master-0 kubenswrapper[26474]: I0223 13:14:48.692843 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mkd2\" (UniqueName: \"kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2\") pod \"route-controller-manager-648db577cf-2sqzl\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:14:48.713947 master-0 kubenswrapper[26474]: I0223 13:14:48.713290 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sh26\" (UniqueName: \"kubernetes.io/projected/ce55de54-8441-4a16-8b57-598042869000-kube-api-access-6sh26\") pod \"insights-operator-59b498fcfb-sswng\" (UID: \"ce55de54-8441-4a16-8b57-598042869000\") " pod="openshift-insights/insights-operator-59b498fcfb-sswng"
Feb 23 13:14:48.726653 master-0 kubenswrapper[26474]: I0223 13:14:48.726596 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx8dp\" (UniqueName: \"kubernetes.io/projected/b0a29266-d968-444d-82bb-085ff1d6e506-kube-api-access-zx8dp\") pod \"prometheus-operator-754bc4d665-2ksrm\" (UID: \"b0a29266-d968-444d-82bb-085ff1d6e506\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-2ksrm"
Feb 23 13:14:48.761388 master-0 kubenswrapper[26474]: I0223 13:14:48.761307 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl5r2\" (UniqueName: \"kubernetes.io/projected/922e0be5-23c2-481e-89be-e918dc4ce90c-kube-api-access-sl5r2\") pod \"apiserver-9f44475c9-drjp5\" (UID: \"922e0be5-23c2-481e-89be-e918dc4ce90c\") " pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:48.773597 master-0 kubenswrapper[26474]: I0223 13:14:48.773550 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfqh\" (UniqueName: \"kubernetes.io/projected/4b9d6485-cf67-49c5-99c1-b8582a0bab70-kube-api-access-tgfqh\") pod \"csi-snapshot-controller-operator-6fb4df594f-f5n2p\" (UID: \"4b9d6485-cf67-49c5-99c1-b8582a0bab70\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-f5n2p"
Feb 23 13:14:48.796995 master-0 kubenswrapper[26474]: I0223 13:14:48.796938 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a6b0d84-a344-43e4-b9c4-c8e0670528de-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-hzbld\" (UID: \"3a6b0d84-a344-43e4-b9c4-c8e0670528de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-hzbld"
Feb 23 13:14:48.817873 master-0 kubenswrapper[26474]: I0223 13:14:48.817818 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pntn4\" (UniqueName: \"kubernetes.io/projected/9bed6748-374e-4d8a-92a0-36d7d735d6b7-kube-api-access-pntn4\") pod \"cluster-monitoring-operator-6bb6d78bf-gjp8h\" (UID: \"9bed6748-374e-4d8a-92a0-36d7d735d6b7\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-gjp8h"
Feb 23 13:14:48.835103 master-0 kubenswrapper[26474]: E0223 13:14:48.835047 26474 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:48.835584 master-0 kubenswrapper[26474]: E0223 13:14:48.835564 26474 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:48.835817 master-0 kubenswrapper[26474]: E0223 13:14:48.835798 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access podName:27c1e327-cb40-4b36-b371-20d1271b8d8d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:49.335759935 +0000 UTC m=+11.182267632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access") pod "installer-3-master-0" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:48.866575 master-0 kubenswrapper[26474]: I0223 13:14:48.866126 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=13.866083597 podStartE2EDuration="13.866083597s" podCreationTimestamp="2026-02-23 13:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:14:48.858807809 +0000 UTC m=+10.705315516" watchObservedRunningTime="2026-02-23 13:14:48.866083597 +0000 UTC m=+10.712591314"
Feb 23 13:14:48.892631 master-0 kubenswrapper[26474]: E0223 13:14:48.892543 26474 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:48.915972 master-0 kubenswrapper[26474]: E0223 13:14:48.915828 26474 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.509s"
Feb 23 13:14:48.916320 master-0 kubenswrapper[26474]: I0223 13:14:48.916300 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:48.916526 master-0 kubenswrapper[26474]: I0223 13:14:48.916509 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6"
Feb 23 13:14:48.916643 master-0 kubenswrapper[26474]: I0223 13:14:48.916628 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-8wrb6"
Feb 23 13:14:48.931251 master-0 kubenswrapper[26474]: I0223 13:14:48.931169 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 23 13:14:48.935507 master-0 kubenswrapper[26474]: I0223 13:14:48.935457 26474 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Feb 23 13:14:48.935596 master-0 kubenswrapper[26474]: I0223 13:14:48.935569 26474 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 23 13:14:48.974125 master-0 kubenswrapper[26474]: I0223 13:14:48.974045 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:48.974723 master-0 kubenswrapper[26474]: I0223 13:14:48.974135 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"f29c0801cb73a88db37a5dde38238b8a02b3aa465a16ef32b1a402a776062703"}
Feb 23 13:14:48.974723 master-0 kubenswrapper[26474]: I0223 13:14:48.974296 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:48.974723 master-0 kubenswrapper[26474]: I0223 13:14:48.974392 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:14:48.974723 master-0 kubenswrapper[26474]: I0223 13:14:48.974417 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"f29c0801cb73a88db37a5dde38238b8a02b3aa465a16ef32b1a402a776062703"}
Feb 23 13:14:48.974723 master-0 kubenswrapper[26474]: I0223 13:14:48.974539 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:14:48.974723 master-0 kubenswrapper[26474]: I0223 13:14:48.974650 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:14:48.974723 master-0 kubenswrapper[26474]: I0223 13:14:48.974674 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 23 13:14:48.974723 master-0 kubenswrapper[26474]: I0223 13:14:48.974697 26474 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="3f1d2c44-e357-47c1-928f-bf14fa7a53e2"
Feb 23 13:14:48.975035 master-0 kubenswrapper[26474]: I0223 13:14:48.974816 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:14:48.975035 master-0 kubenswrapper[26474]: I0223 13:14:48.974843 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 23 13:14:48.975035 master-0 kubenswrapper[26474]: I0223 13:14:48.974861 26474 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="3f1d2c44-e357-47c1-928f-bf14fa7a53e2"
Feb 23 13:14:48.975035 master-0 kubenswrapper[26474]: I0223 13:14:48.974900 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-588zk"
Feb 23 13:14:48.975035 master-0 kubenswrapper[26474]: I0223 13:14:48.974922 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:48.975035 master-0 kubenswrapper[26474]: I0223 13:14:48.974986 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:48.975035 master-0 kubenswrapper[26474]: I0223 13:14:48.975032 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:14:48.975239 master-0 kubenswrapper[26474]: I0223 13:14:48.975056 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:48.975239 master-0 kubenswrapper[26474]: I0223 13:14:48.975099 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:48.975239 master-0 kubenswrapper[26474]: I0223 13:14:48.975125 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:14:48.975239 master-0 kubenswrapper[26474]: I0223 13:14:48.975180 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:14:48.975373 master-0 kubenswrapper[26474]: I0223 13:14:48.975298 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:48.976098 master-0 kubenswrapper[26474]: I0223 13:14:48.976061 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:14:48.976194 master-0 kubenswrapper[26474]: I0223 13:14:48.976156 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-ql2nl"
Feb 23 13:14:48.976279 master-0 kubenswrapper[26474]: I0223 13:14:48.976240 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:14:48.976357 master-0 kubenswrapper[26474]: I0223 13:14:48.976318 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-cqmh7"
Feb 23 13:14:48.976426 master-0 kubenswrapper[26474]: I0223 13:14:48.976396 26474 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:48.976684 master-0 kubenswrapper[26474]: I0223 13:14:48.976637 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:14:48.976725 master-0 kubenswrapper[26474]: I0223 13:14:48.976710 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"
Feb 23 13:14:48.976770 master-0 kubenswrapper[26474]: I0223 13:14:48.976754 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:48.976802 master-0 kubenswrapper[26474]: I0223 13:14:48.976787 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"
Feb 23 13:14:48.976914 master-0 kubenswrapper[26474]: I0223 13:14:48.976887 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:14:48.976945 master-0 kubenswrapper[26474]: I0223 13:14:48.976922 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:14:48.976945 master-0 kubenswrapper[26474]: I0223 13:14:48.976937 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:48.977007 master-0 kubenswrapper[26474]: I0223 13:14:48.976957 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-c7hng"
Feb 23 13:14:48.977007 master-0 kubenswrapper[26474]: I0223 13:14:48.976977 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-544948f94-sshjz"
Feb 23 13:14:48.977007 master-0 kubenswrapper[26474]: I0223 13:14:48.977002 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:14:48.977098 master-0 kubenswrapper[26474]: I0223 13:14:48.977023 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:14:48.977098 master-0 kubenswrapper[26474]: I0223 13:14:48.977049 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:48.977098 master-0 kubenswrapper[26474]: I0223 13:14:48.977069 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:14:48.977098 master-0 kubenswrapper[26474]: I0223 13:14:48.977091 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:48.977240 master-0 kubenswrapper[26474]: I0223 13:14:48.977112 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ljphn"
Feb 23 13:14:48.977240 master-0 kubenswrapper[26474]: I0223 13:14:48.977150 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:14:48.977240 master-0 kubenswrapper[26474]: I0223 13:14:48.977228 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ljphn"
Feb 23 13:14:48.980924 master-0 kubenswrapper[26474]: I0223 13:14:48.980874 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:14:49.032035 master-0 kubenswrapper[26474]: I0223 13:14:49.031931 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:14:49.073104 master-0 kubenswrapper[26474]: I0223 13:14:49.072956 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf" event={"ID":"73ba4f16-0217-4bf1-8fc2-6b385eda0771","Type":"ContainerStarted","Data":"0a0feba97a7f5fa9113f68e26a55f9fad13fe53e47c373ac8ec5a367f8a4cf9c"}
Feb 23 13:14:49.076488 master-0 kubenswrapper[26474]: I0223 13:14:49.076433 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:14:49.092193 master-0 kubenswrapper[26474]: I0223 13:14:49.092133 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:49.120950 master-0 kubenswrapper[26474]: E0223 13:14:49.120892 26474 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:49.121853 master-0 kubenswrapper[26474]: I0223 13:14:49.121837 26474 scope.go:117] "RemoveContainer" containerID="f29c0801cb73a88db37a5dde38238b8a02b3aa465a16ef32b1a402a776062703"
Feb 23 13:14:49.370941 master-0 kubenswrapper[26474]: I0223 13:14:49.370870 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 23 13:14:49.371137 master-0 kubenswrapper[26474]: E0223 13:14:49.371101 26474 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:49.371177 master-0 kubenswrapper[26474]: E0223 13:14:49.371133 26474 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:49.371208 master-0 kubenswrapper[26474]: E0223 13:14:49.371193 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access podName:27c1e327-cb40-4b36-b371-20d1271b8d8d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:50.371172065 +0000 UTC m=+12.217679752 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access") pod "installer-3-master-0" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:49.642408 master-0 kubenswrapper[26474]: I0223 13:14:49.642173 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:49.647127 master-0 kubenswrapper[26474]: I0223 13:14:49.647069 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:49.713668 master-0 kubenswrapper[26474]: I0223 13:14:49.713583 26474 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:49.848278 master-0 kubenswrapper[26474]: I0223 13:14:49.848201 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:49.854451 master-0 kubenswrapper[26474]: I0223 13:14:49.854330 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:50.084584 master-0 kubenswrapper[26474]: I0223 13:14:50.083997 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/1.log"
Feb 23 13:14:50.088308 master-0 kubenswrapper[26474]: I0223 13:14:50.087450 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d"}
Feb 23 13:14:50.088308 master-0 kubenswrapper[26474]: I0223 13:14:50.088162 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:14:50.089698 master-0 kubenswrapper[26474]: I0223 13:14:50.089656 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:50.092199 master-0 kubenswrapper[26474]: I0223 13:14:50.092043 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7b65dc9fcb-kcfgf"
Feb 23 13:14:50.192451 master-0 kubenswrapper[26474]: I0223 13:14:50.192383 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:50.197512 master-0 kubenswrapper[26474]: I0223 13:14:50.197468 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:50.384670 master-0 kubenswrapper[26474]: I0223 13:14:50.384541 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 23 13:14:50.385108 master-0 kubenswrapper[26474]: E0223 13:14:50.385045 26474 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:50.385193 master-0 kubenswrapper[26474]: E0223 13:14:50.385180 26474 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:50.385379 master-0 kubenswrapper[26474]: E0223 13:14:50.385321 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access podName:27c1e327-cb40-4b36-b371-20d1271b8d8d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:52.385297617 +0000 UTC m=+14.231805294 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access") pod "installer-3-master-0" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:51.091516 master-0 kubenswrapper[26474]: I0223 13:14:51.091266 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:14:51.528156 master-0 kubenswrapper[26474]: I0223 13:14:51.528065 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=3.528045924 podStartE2EDuration="3.528045924s" podCreationTimestamp="2026-02-23 13:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:14:51.526602259 +0000 UTC m=+13.373109956" watchObservedRunningTime="2026-02-23 13:14:51.528045924 +0000 UTC m=+13.374553601"
Feb 23 13:14:51.951287 master-0 kubenswrapper[26474]: I0223 13:14:51.951193 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:51.955982 master-0 kubenswrapper[26474]: I0223 13:14:51.955963 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:52.412452 master-0 kubenswrapper[26474]: I0223 13:14:52.412321 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 23 13:14:52.412939 master-0 kubenswrapper[26474]: E0223 13:14:52.412520 26474 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:52.412939 master-0 kubenswrapper[26474]: E0223 13:14:52.412568 26474 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:52.412939 master-0 kubenswrapper[26474]: E0223 13:14:52.412637 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access podName:27c1e327-cb40-4b36-b371-20d1271b8d8d nodeName:}" failed. No retries permitted until 2026-02-23 13:14:56.412617019 +0000 UTC m=+18.259124696 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access") pod "installer-3-master-0" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:52.661665 master-0 kubenswrapper[26474]: I0223 13:14:52.661591 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:14:53.099664 master-0 kubenswrapper[26474]: I0223 13:14:53.099609 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:14:53.118666 master-0 kubenswrapper[26474]: I0223 13:14:53.118599 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-lqc9w"
Feb 23 13:14:53.257756 master-0 kubenswrapper[26474]: I0223 13:14:53.257686 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:53.283629 master-0 kubenswrapper[26474]: I0223 13:14:53.283573 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:53.346263 master-0 kubenswrapper[26474]: I0223 13:14:53.346213 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6c65bdd8f8-vblb2"
Feb 23 13:14:53.437433 master-0 kubenswrapper[26474]: I0223 13:14:53.433882 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:53.437433 master-0 kubenswrapper[26474]: I0223 13:14:53.434107 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:14:53.447182 master-0 kubenswrapper[26474]: I0223 13:14:53.446755 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:14:53.907719 master-0 kubenswrapper[26474]: I0223 13:14:53.907675 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9f44475c9-drjp5"
Feb 23 13:14:54.112409 master-0 kubenswrapper[26474]: I0223 13:14:54.112329 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:14:54.112409 master-0 kubenswrapper[26474]: I0223 13:14:54.112392 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:14:54.246569 master-0 kubenswrapper[26474]: I0223 13:14:54.246327 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:14:54.249962 master-0 kubenswrapper[26474]: I0223 13:14:54.249893 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:14:54.804753 master-0 kubenswrapper[26474]: I0223 13:14:54.804663 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:54.834397 master-0 kubenswrapper[26474]: I0223 13:14:54.833487 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:55.088277 master-0 kubenswrapper[26474]: I0223 13:14:55.088098 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:55.118662 master-0 kubenswrapper[26474]: I0223 13:14:55.118552 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qz8dt"
Feb 23 13:14:55.782806 master-0 kubenswrapper[26474]: I0223 13:14:55.782728 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t"
Feb 23 13:14:55.787482 master-0 kubenswrapper[26474]: I0223 13:14:55.787431 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-ld22t"
Feb 23 13:14:56.280176 master-0 kubenswrapper[26474]: I0223 13:14:56.280096 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"
Feb 23 13:14:56.283823 master-0 kubenswrapper[26474]: I0223 13:14:56.283785 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-jbhn6"
Feb 23 13:14:56.498804 master-0 kubenswrapper[26474]: I0223 13:14:56.498737 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 23 13:14:56.499115 master-0 kubenswrapper[26474]: E0223 13:14:56.499043 26474 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:56.499169 master-0 kubenswrapper[26474]: E0223 13:14:56.499120 26474 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:56.499233 master-0 kubenswrapper[26474]: E0223 13:14:56.499205 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access podName:27c1e327-cb40-4b36-b371-20d1271b8d8d nodeName:}" failed. No retries permitted until 2026-02-23 13:15:04.499182115 +0000 UTC m=+26.345689792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access") pod "installer-3-master-0" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:14:56.634124 master-0 kubenswrapper[26474]: I0223 13:14:56.633991 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-rnz52"
Feb 23 13:14:56.637085 master-0 kubenswrapper[26474]: I0223 13:14:56.637033 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-rnz52"
Feb 23 13:14:58.384901 master-0 kubenswrapper[26474]: I0223 13:14:58.384834 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwhpv"
Feb 23 13:14:58.697073 master-0 kubenswrapper[26474]: I0223 13:14:58.696888 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vnmk2"
Feb 23 13:14:58.700496 master-0 kubenswrapper[26474]: I0223 13:14:58.700460 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrtmg"
Feb 23 13:14:58.953035 master-0 kubenswrapper[26474]: I0223 13:14:58.952923 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w7wq9"
Feb 23 13:15:00.178287 master-0 kubenswrapper[26474]: I0223 13:15:00.178134 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl"]
Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178383 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" containerName="installer"
Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178398 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" containerName="installer"
Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178408 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bad4fd9-074b-4a4e-8af9-50bdc4be09df" containerName="installer"
Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178415 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bad4fd9-074b-4a4e-8af9-50bdc4be09df" containerName="installer"
Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178424 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283fd2f4-771b-4592-a143-b7e3a5ed6765" containerName="installer"
Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178430 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="283fd2f4-771b-4592-a143-b7e3a5ed6765" containerName="installer"
Feb 23 13:15:00.180001 master-0
kubenswrapper[26474]: E0223 13:15:00.178441 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abccfbee-41f4-4557-b953-eb6e719aee31" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178447 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="abccfbee-41f4-4557-b953-eb6e719aee31" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178461 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c1e327-cb40-4b36-b371-20d1271b8d8d" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178468 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c1e327-cb40-4b36-b371-20d1271b8d8d" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178479 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b76471-bb9d-45a1-b3be-53e4f013e604" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178485 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b76471-bb9d-45a1-b3be-53e4f013e604" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178494 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178500 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178507 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178513 26474 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178521 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e4636e-0cb6-492b-89b0-17ca9ff9e252" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178526 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e4636e-0cb6-492b-89b0-17ca9ff9e252" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178537 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178543 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178557 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3a080-c8a3-4359-9442-972bf4bb9b04" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178564 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3a080-c8a3-4359-9442-972bf4bb9b04" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178573 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178578 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178584 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller" Feb 23 13:15:00.180001 master-0 
kubenswrapper[26474]: I0223 13:15:00.178590 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178598 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178604 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: E0223 13:15:00.178620 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178626 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178727 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0063130-dfb5-4907-a000-f023a77c6441" containerName="assisted-installer-controller" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178747 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178758 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c1e327-cb40-4b36-b371-20d1271b8d8d" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178771 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="abccfbee-41f4-4557-b953-eb6e719aee31" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178780 26474 
memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178793 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bad4fd9-074b-4a4e-8af9-50bdc4be09df" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178802 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178814 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e4636e-0cb6-492b-89b0-17ca9ff9e252" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178829 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac1ae06-bb6b-448f-b2ab-cf2adc5b3991" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178839 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d3a080-c8a3-4359-9442-972bf4bb9b04" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178845 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178855 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b76471-bb9d-45a1-b3be-53e4f013e604" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178863 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="283fd2f4-771b-4592-a143-b7e3a5ed6765" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178870 26474 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c0dfc05d-bd62-4c0c-aae4-5d1f44de9449" containerName="installer" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.178877 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 23 13:15:00.180001 master-0 kubenswrapper[26474]: I0223 13:15:00.179308 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.183257 master-0 kubenswrapper[26474]: I0223 13:15:00.180937 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-pnt5q" Feb 23 13:15:00.183257 master-0 kubenswrapper[26474]: I0223 13:15:00.181170 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 13:15:00.191141 master-0 kubenswrapper[26474]: I0223 13:15:00.191059 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl"] Feb 23 13:15:00.356633 master-0 kubenswrapper[26474]: I0223 13:15:00.356570 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c228edf9-d832-409d-a746-7ec7dc365137-secret-volume\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.356633 master-0 kubenswrapper[26474]: I0223 13:15:00.356637 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c228edf9-d832-409d-a746-7ec7dc365137-config-volume\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.356930 master-0 kubenswrapper[26474]: I0223 13:15:00.356702 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhc4m\" (UniqueName: \"kubernetes.io/projected/c228edf9-d832-409d-a746-7ec7dc365137-kube-api-access-rhc4m\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.458572 master-0 kubenswrapper[26474]: I0223 13:15:00.458268 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhc4m\" (UniqueName: \"kubernetes.io/projected/c228edf9-d832-409d-a746-7ec7dc365137-kube-api-access-rhc4m\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.458930 master-0 kubenswrapper[26474]: I0223 13:15:00.458688 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c228edf9-d832-409d-a746-7ec7dc365137-secret-volume\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.458930 master-0 kubenswrapper[26474]: I0223 13:15:00.458751 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c228edf9-d832-409d-a746-7ec7dc365137-config-volume\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.459498 master-0 kubenswrapper[26474]: I0223 13:15:00.459416 26474 swap_util.go:74] "error creating dir to test if tmpfs 
noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 13:15:00.460088 master-0 kubenswrapper[26474]: I0223 13:15:00.460016 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c228edf9-d832-409d-a746-7ec7dc365137-config-volume\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.462393 master-0 kubenswrapper[26474]: I0223 13:15:00.462354 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c228edf9-d832-409d-a746-7ec7dc365137-secret-volume\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.484422 master-0 kubenswrapper[26474]: I0223 13:15:00.484185 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhc4m\" (UniqueName: \"kubernetes.io/projected/c228edf9-d832-409d-a746-7ec7dc365137-kube-api-access-rhc4m\") pod \"collect-profiles-29530875-f28dl\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.499730 master-0 kubenswrapper[26474]: I0223 13:15:00.499659 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" Feb 23 13:15:00.946241 master-0 kubenswrapper[26474]: I0223 13:15:00.946001 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl"] Feb 23 13:15:01.168179 master-0 kubenswrapper[26474]: I0223 13:15:01.167819 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" event={"ID":"c228edf9-d832-409d-a746-7ec7dc365137","Type":"ContainerStarted","Data":"4229371bb94208186d145e8f95cbd7d7282ab2b022ffa1dbb6eac3c70d61def8"} Feb 23 13:15:01.168179 master-0 kubenswrapper[26474]: I0223 13:15:01.167943 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" event={"ID":"c228edf9-d832-409d-a746-7ec7dc365137","Type":"ContainerStarted","Data":"adfcc4977b366334f93ea94c877e829d0406dd331632c1dd8c8031acd2d82e67"} Feb 23 13:15:01.194199 master-0 kubenswrapper[26474]: I0223 13:15:01.194073 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" podStartSLOduration=1.194054161 podStartE2EDuration="1.194054161s" podCreationTimestamp="2026-02-23 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:15:01.193976269 +0000 UTC m=+23.040483956" watchObservedRunningTime="2026-02-23 13:15:01.194054161 +0000 UTC m=+23.040561838" Feb 23 13:15:02.177505 master-0 kubenswrapper[26474]: I0223 13:15:02.177436 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/3.log" Feb 23 13:15:02.178310 master-0 kubenswrapper[26474]: I0223 13:15:02.178257 26474 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/2.log" Feb 23 13:15:02.178854 master-0 kubenswrapper[26474]: I0223 13:15:02.178806 26474 generic.go:334] "Generic (PLEG): container finished" podID="878aa813-a8b9-4a6f-8086-778df276d0d7" containerID="c83498da67d9371893a003fc0cf39cf6626ee5bbdf6f92277274b5695bb058d4" exitCode=1 Feb 23 13:15:02.178984 master-0 kubenswrapper[26474]: I0223 13:15:02.178892 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerDied","Data":"c83498da67d9371893a003fc0cf39cf6626ee5bbdf6f92277274b5695bb058d4"} Feb 23 13:15:02.179090 master-0 kubenswrapper[26474]: I0223 13:15:02.179051 26474 scope.go:117] "RemoveContainer" containerID="3e7dbf4208abe5c9d935ae2680f6b0cac93b049b64aaa57ef376ac31460e3774" Feb 23 13:15:02.179982 master-0 kubenswrapper[26474]: I0223 13:15:02.179941 26474 scope.go:117] "RemoveContainer" containerID="c83498da67d9371893a003fc0cf39cf6626ee5bbdf6f92277274b5695bb058d4" Feb 23 13:15:02.668211 master-0 kubenswrapper[26474]: I0223 13:15:02.668132 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:15:02.745221 master-0 kubenswrapper[26474]: I0223 13:15:02.743272 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:15:03.188739 master-0 kubenswrapper[26474]: I0223 13:15:03.188671 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/3.log" Feb 23 13:15:03.189152 master-0 kubenswrapper[26474]: I0223 13:15:03.189110 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-6569778c84-k9h69" event={"ID":"878aa813-a8b9-4a6f-8086-778df276d0d7","Type":"ContainerStarted","Data":"b6a8b82a2ca2c267a7cf7e0185f63ddef5891eb1000ca83347e9d6afb83d99fa"} Feb 23 13:15:03.372293 master-0 kubenswrapper[26474]: I0223 13:15:03.372196 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t9cng"] Feb 23 13:15:03.373088 master-0 kubenswrapper[26474]: I0223 13:15:03.373061 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t9cng" Feb 23 13:15:03.379487 master-0 kubenswrapper[26474]: I0223 13:15:03.379458 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 13:15:03.379832 master-0 kubenswrapper[26474]: I0223 13:15:03.379811 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 13:15:03.379949 master-0 kubenswrapper[26474]: I0223 13:15:03.379552 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 13:15:03.398914 master-0 kubenswrapper[26474]: I0223 13:15:03.398143 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t9cng"] Feb 23 13:15:03.529668 master-0 kubenswrapper[26474]: I0223 13:15:03.529606 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqdl7\" (UniqueName: \"kubernetes.io/projected/f5488d6b-d85c-4a31-a34e-ae5c41b95d18-kube-api-access-vqdl7\") pod \"ingress-canary-t9cng\" (UID: \"f5488d6b-d85c-4a31-a34e-ae5c41b95d18\") " pod="openshift-ingress-canary/ingress-canary-t9cng" Feb 23 13:15:03.529753 master-0 kubenswrapper[26474]: I0223 13:15:03.529676 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/f5488d6b-d85c-4a31-a34e-ae5c41b95d18-cert\") pod \"ingress-canary-t9cng\" (UID: \"f5488d6b-d85c-4a31-a34e-ae5c41b95d18\") " pod="openshift-ingress-canary/ingress-canary-t9cng" Feb 23 13:15:03.631040 master-0 kubenswrapper[26474]: I0223 13:15:03.630932 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5488d6b-d85c-4a31-a34e-ae5c41b95d18-cert\") pod \"ingress-canary-t9cng\" (UID: \"f5488d6b-d85c-4a31-a34e-ae5c41b95d18\") " pod="openshift-ingress-canary/ingress-canary-t9cng" Feb 23 13:15:03.631393 master-0 kubenswrapper[26474]: I0223 13:15:03.631187 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqdl7\" (UniqueName: \"kubernetes.io/projected/f5488d6b-d85c-4a31-a34e-ae5c41b95d18-kube-api-access-vqdl7\") pod \"ingress-canary-t9cng\" (UID: \"f5488d6b-d85c-4a31-a34e-ae5c41b95d18\") " pod="openshift-ingress-canary/ingress-canary-t9cng" Feb 23 13:15:03.634451 master-0 kubenswrapper[26474]: I0223 13:15:03.634403 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5488d6b-d85c-4a31-a34e-ae5c41b95d18-cert\") pod \"ingress-canary-t9cng\" (UID: \"f5488d6b-d85c-4a31-a34e-ae5c41b95d18\") " pod="openshift-ingress-canary/ingress-canary-t9cng" Feb 23 13:15:03.648150 master-0 kubenswrapper[26474]: I0223 13:15:03.648097 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqdl7\" (UniqueName: \"kubernetes.io/projected/f5488d6b-d85c-4a31-a34e-ae5c41b95d18-kube-api-access-vqdl7\") pod \"ingress-canary-t9cng\" (UID: \"f5488d6b-d85c-4a31-a34e-ae5c41b95d18\") " pod="openshift-ingress-canary/ingress-canary-t9cng" Feb 23 13:15:03.693366 master-0 kubenswrapper[26474]: I0223 13:15:03.693212 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t9cng" Feb 23 13:15:04.103474 master-0 kubenswrapper[26474]: I0223 13:15:04.103399 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t9cng"] Feb 23 13:15:04.200924 master-0 kubenswrapper[26474]: I0223 13:15:04.200824 26474 generic.go:334] "Generic (PLEG): container finished" podID="c228edf9-d832-409d-a746-7ec7dc365137" containerID="4229371bb94208186d145e8f95cbd7d7282ab2b022ffa1dbb6eac3c70d61def8" exitCode=0 Feb 23 13:15:04.201141 master-0 kubenswrapper[26474]: I0223 13:15:04.200935 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" event={"ID":"c228edf9-d832-409d-a746-7ec7dc365137","Type":"ContainerDied","Data":"4229371bb94208186d145e8f95cbd7d7282ab2b022ffa1dbb6eac3c70d61def8"} Feb 23 13:15:04.202792 master-0 kubenswrapper[26474]: I0223 13:15:04.202758 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t9cng" event={"ID":"f5488d6b-d85c-4a31-a34e-ae5c41b95d18","Type":"ContainerStarted","Data":"9ba3da145dae05f9527889c22f40a7bd4a81a1742d2dc445c3f45c47148db3a6"} Feb 23 13:15:04.552073 master-0 kubenswrapper[26474]: I0223 13:15:04.551979 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:15:04.552572 master-0 kubenswrapper[26474]: E0223 13:15:04.552191 26474 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 23 13:15:04.552572 master-0 kubenswrapper[26474]: E0223 13:15:04.552235 26474 projected.go:194] Error preparing data for projected 
volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 23 13:15:04.552572 master-0 kubenswrapper[26474]: E0223 13:15:04.552318 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access podName:27c1e327-cb40-4b36-b371-20d1271b8d8d nodeName:}" failed. No retries permitted until 2026-02-23 13:15:20.55229274 +0000 UTC m=+42.398800457 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access") pod "installer-3-master-0" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 23 13:15:05.212905 master-0 kubenswrapper[26474]: I0223 13:15:05.212770 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t9cng" event={"ID":"f5488d6b-d85c-4a31-a34e-ae5c41b95d18","Type":"ContainerStarted","Data":"c7b7b9012d5a4147fb6d036dc6df28a95c7c53cfac4a68487344f66a8e8189a5"} Feb 23 13:15:05.240618 master-0 kubenswrapper[26474]: I0223 13:15:05.240016 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-t9cng" podStartSLOduration=2.239974592 podStartE2EDuration="2.239974592s" podCreationTimestamp="2026-02-23 13:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:15:05.236411125 +0000 UTC m=+27.082918802" watchObservedRunningTime="2026-02-23 13:15:05.239974592 +0000 UTC m=+27.086482279" Feb 23 13:15:05.555111 master-0 kubenswrapper[26474]: I0223 13:15:05.555042 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl"
Feb 23 13:15:05.693605 master-0 kubenswrapper[26474]: I0223 13:15:05.693513 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c228edf9-d832-409d-a746-7ec7dc365137-secret-volume\") pod \"c228edf9-d832-409d-a746-7ec7dc365137\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") "
Feb 23 13:15:05.693605 master-0 kubenswrapper[26474]: I0223 13:15:05.693605 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c228edf9-d832-409d-a746-7ec7dc365137-config-volume\") pod \"c228edf9-d832-409d-a746-7ec7dc365137\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") "
Feb 23 13:15:05.694252 master-0 kubenswrapper[26474]: I0223 13:15:05.693653 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhc4m\" (UniqueName: \"kubernetes.io/projected/c228edf9-d832-409d-a746-7ec7dc365137-kube-api-access-rhc4m\") pod \"c228edf9-d832-409d-a746-7ec7dc365137\" (UID: \"c228edf9-d832-409d-a746-7ec7dc365137\") "
Feb 23 13:15:05.694390 master-0 kubenswrapper[26474]: I0223 13:15:05.694282 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c228edf9-d832-409d-a746-7ec7dc365137-config-volume" (OuterVolumeSpecName: "config-volume") pod "c228edf9-d832-409d-a746-7ec7dc365137" (UID: "c228edf9-d832-409d-a746-7ec7dc365137"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:15:05.697387 master-0 kubenswrapper[26474]: I0223 13:15:05.697221 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c228edf9-d832-409d-a746-7ec7dc365137-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c228edf9-d832-409d-a746-7ec7dc365137" (UID: "c228edf9-d832-409d-a746-7ec7dc365137"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:15:05.699013 master-0 kubenswrapper[26474]: I0223 13:15:05.698978 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c228edf9-d832-409d-a746-7ec7dc365137-kube-api-access-rhc4m" (OuterVolumeSpecName: "kube-api-access-rhc4m") pod "c228edf9-d832-409d-a746-7ec7dc365137" (UID: "c228edf9-d832-409d-a746-7ec7dc365137"). InnerVolumeSpecName "kube-api-access-rhc4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:15:05.795380 master-0 kubenswrapper[26474]: I0223 13:15:05.795119 26474 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c228edf9-d832-409d-a746-7ec7dc365137-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:05.795380 master-0 kubenswrapper[26474]: I0223 13:15:05.795184 26474 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c228edf9-d832-409d-a746-7ec7dc365137-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:05.795380 master-0 kubenswrapper[26474]: I0223 13:15:05.795209 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhc4m\" (UniqueName: \"kubernetes.io/projected/c228edf9-d832-409d-a746-7ec7dc365137-kube-api-access-rhc4m\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:06.224172 master-0 kubenswrapper[26474]: I0223 13:15:06.223993 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl"
Feb 23 13:15:06.224172 master-0 kubenswrapper[26474]: I0223 13:15:06.223981 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-f28dl" event={"ID":"c228edf9-d832-409d-a746-7ec7dc365137","Type":"ContainerDied","Data":"adfcc4977b366334f93ea94c877e829d0406dd331632c1dd8c8031acd2d82e67"}
Feb 23 13:15:06.224172 master-0 kubenswrapper[26474]: I0223 13:15:06.224065 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adfcc4977b366334f93ea94c877e829d0406dd331632c1dd8c8031acd2d82e67"
Feb 23 13:15:06.785027 master-0 kubenswrapper[26474]: I0223 13:15:06.784960 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:15:10.490426 master-0 kubenswrapper[26474]: I0223 13:15:10.490304 26474 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:15:10.491240 master-0 kubenswrapper[26474]: I0223 13:15:10.490750 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor" containerID="cri-o://b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71" gracePeriod=5
Feb 23 13:15:15.485611 master-0 kubenswrapper[26474]: I0223 13:15:15.485552 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-vn4fp"]
Feb 23 13:15:15.486165 master-0 kubenswrapper[26474]: E0223 13:15:15.485845 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 23 13:15:15.486165 master-0 kubenswrapper[26474]: I0223 13:15:15.485860 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 23 13:15:15.486165 master-0 kubenswrapper[26474]: E0223 13:15:15.485896 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c228edf9-d832-409d-a746-7ec7dc365137" containerName="collect-profiles"
Feb 23 13:15:15.486165 master-0 kubenswrapper[26474]: I0223 13:15:15.485904 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c228edf9-d832-409d-a746-7ec7dc365137" containerName="collect-profiles"
Feb 23 13:15:15.486165 master-0 kubenswrapper[26474]: I0223 13:15:15.486055 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 23 13:15:15.486165 master-0 kubenswrapper[26474]: I0223 13:15:15.486084 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c228edf9-d832-409d-a746-7ec7dc365137" containerName="collect-profiles"
Feb 23 13:15:15.486623 master-0 kubenswrapper[26474]: I0223 13:15:15.486600 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.492042 master-0 kubenswrapper[26474]: I0223 13:15:15.492011 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 23 13:15:15.492277 master-0 kubenswrapper[26474]: I0223 13:15:15.492238 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-p9nh5"
Feb 23 13:15:15.492443 master-0 kubenswrapper[26474]: I0223 13:15:15.492421 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 23 13:15:15.496280 master-0 kubenswrapper[26474]: I0223 13:15:15.496257 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 23 13:15:15.497290 master-0 kubenswrapper[26474]: I0223 13:15:15.497276 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 23 13:15:15.504907 master-0 kubenswrapper[26474]: I0223 13:15:15.504869 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 23 13:15:15.509838 master-0 kubenswrapper[26474]: I0223 13:15:15.509789 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-vn4fp"]
Feb 23 13:15:15.653431 master-0 kubenswrapper[26474]: I0223 13:15:15.646789 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 23 13:15:15.653431 master-0 kubenswrapper[26474]: I0223 13:15:15.646874 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:15:15.672649 master-0 kubenswrapper[26474]: I0223 13:15:15.672555 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69b3a8a2-8b91-4115-9d4c-67fc97b36811-config\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.672781 master-0 kubenswrapper[26474]: I0223 13:15:15.672738 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69b3a8a2-8b91-4115-9d4c-67fc97b36811-trusted-ca\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.672838 master-0 kubenswrapper[26474]: I0223 13:15:15.672810 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69b3a8a2-8b91-4115-9d4c-67fc97b36811-serving-cert\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.672990 master-0 kubenswrapper[26474]: I0223 13:15:15.672946 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hprg\" (UniqueName: \"kubernetes.io/projected/69b3a8a2-8b91-4115-9d4c-67fc97b36811-kube-api-access-2hprg\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.774463 master-0 kubenswrapper[26474]: I0223 13:15:15.774387 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 23 13:15:15.774463 master-0 kubenswrapper[26474]: I0223 13:15:15.774461 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 23 13:15:15.774754 master-0 kubenswrapper[26474]: I0223 13:15:15.774481 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 23 13:15:15.774754 master-0 kubenswrapper[26474]: I0223 13:15:15.774496 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 23 13:15:15.774754 master-0 kubenswrapper[26474]: I0223 13:15:15.774594 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 23 13:15:15.774754 master-0 kubenswrapper[26474]: I0223 13:15:15.774733 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69b3a8a2-8b91-4115-9d4c-67fc97b36811-trusted-ca\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.774913 master-0 kubenswrapper[26474]: I0223 13:15:15.774769 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69b3a8a2-8b91-4115-9d4c-67fc97b36811-serving-cert\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.774913 master-0 kubenswrapper[26474]: I0223 13:15:15.774792 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hprg\" (UniqueName: \"kubernetes.io/projected/69b3a8a2-8b91-4115-9d4c-67fc97b36811-kube-api-access-2hprg\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.774913 master-0 kubenswrapper[26474]: I0223 13:15:15.774846 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69b3a8a2-8b91-4115-9d4c-67fc97b36811-config\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.775437 master-0 kubenswrapper[26474]: I0223 13:15:15.775316 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests" (OuterVolumeSpecName: "manifests") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:15:15.775523 master-0 kubenswrapper[26474]: I0223 13:15:15.775492 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:15:15.775690 master-0 kubenswrapper[26474]: I0223 13:15:15.775661 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69b3a8a2-8b91-4115-9d4c-67fc97b36811-config\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.775739 master-0 kubenswrapper[26474]: I0223 13:15:15.775720 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:15:15.777139 master-0 kubenswrapper[26474]: I0223 13:15:15.777112 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69b3a8a2-8b91-4115-9d4c-67fc97b36811-trusted-ca\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.777690 master-0 kubenswrapper[26474]: I0223 13:15:15.777626 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log" (OuterVolumeSpecName: "var-log") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:15:15.780818 master-0 kubenswrapper[26474]: I0223 13:15:15.780788 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69b3a8a2-8b91-4115-9d4c-67fc97b36811-serving-cert\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.781866 master-0 kubenswrapper[26474]: I0223 13:15:15.781809 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:15:15.816014 master-0 kubenswrapper[26474]: I0223 13:15:15.815909 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hprg\" (UniqueName: \"kubernetes.io/projected/69b3a8a2-8b91-4115-9d4c-67fc97b36811-kube-api-access-2hprg\") pod \"console-operator-5df5ffc47c-vn4fp\" (UID: \"69b3a8a2-8b91-4115-9d4c-67fc97b36811\") " pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:15.875658 master-0 kubenswrapper[26474]: I0223 13:15:15.875592 26474 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:15.875658 master-0 kubenswrapper[26474]: I0223 13:15:15.875636 26474 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:15.875658 master-0 kubenswrapper[26474]: I0223 13:15:15.875648 26474 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:15.875658 master-0 kubenswrapper[26474]: I0223 13:15:15.875656 26474 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:15.875658 master-0 kubenswrapper[26474]: I0223 13:15:15.875664 26474 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:16.104713 master-0 kubenswrapper[26474]: I0223 13:15:16.104544 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:16.312365 master-0 kubenswrapper[26474]: I0223 13:15:16.312295 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 23 13:15:16.312575 master-0 kubenswrapper[26474]: I0223 13:15:16.312398 26474 generic.go:334] "Generic (PLEG): container finished" podID="5c4f5d60772fa42f26e9c219bffa62b9" containerID="b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71" exitCode=137
Feb 23 13:15:16.312575 master-0 kubenswrapper[26474]: I0223 13:15:16.312461 26474 scope.go:117] "RemoveContainer" containerID="b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71"
Feb 23 13:15:16.312575 master-0 kubenswrapper[26474]: I0223 13:15:16.312473 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:15:16.337662 master-0 kubenswrapper[26474]: I0223 13:15:16.337481 26474 scope.go:117] "RemoveContainer" containerID="b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71"
Feb 23 13:15:16.338643 master-0 kubenswrapper[26474]: E0223 13:15:16.338598 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71\": container with ID starting with b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71 not found: ID does not exist" containerID="b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71"
Feb 23 13:15:16.338693 master-0 kubenswrapper[26474]: I0223 13:15:16.338648 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71"} err="failed to get container status \"b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71\": rpc error: code = NotFound desc = could not find container \"b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71\": container with ID starting with b51743371408caa9caf442e3684eb7987bf16b099d5694115d945fa58b29de71 not found: ID does not exist"
Feb 23 13:15:16.382262 master-0 kubenswrapper[26474]: I0223 13:15:16.382012 26474 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="3d1f571b-ffd0-4bc4-9b47-e1a7b22281e4"
Feb 23 13:15:16.406318 master-0 kubenswrapper[26474]: I0223 13:15:16.406253 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4f5d60772fa42f26e9c219bffa62b9" path="/var/lib/kubelet/pods/5c4f5d60772fa42f26e9c219bffa62b9/volumes"
Feb 23 13:15:16.406581 master-0 kubenswrapper[26474]: I0223 13:15:16.406562 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 23 13:15:16.421617 master-0 kubenswrapper[26474]: I0223 13:15:16.421560 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:15:16.421617 master-0 kubenswrapper[26474]: I0223 13:15:16.421618 26474 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="3d1f571b-ffd0-4bc4-9b47-e1a7b22281e4"
Feb 23 13:15:16.424395 master-0 kubenswrapper[26474]: I0223 13:15:16.424315 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:15:16.424478 master-0 kubenswrapper[26474]: I0223 13:15:16.424400 26474 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="3d1f571b-ffd0-4bc4-9b47-e1a7b22281e4"
Feb 23 13:15:16.589588 master-0 kubenswrapper[26474]: I0223 13:15:16.589533 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-vn4fp"]
Feb 23 13:15:16.594985 master-0 kubenswrapper[26474]: W0223 13:15:16.594731 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69b3a8a2_8b91_4115_9d4c_67fc97b36811.slice/crio-5652378209ba8fff869227584b33535d8b99d5ccc9ba86654a5bcfce11b29bb6 WatchSource:0}: Error finding container 5652378209ba8fff869227584b33535d8b99d5ccc9ba86654a5bcfce11b29bb6: Status 404 returned error can't find the container with id 5652378209ba8fff869227584b33535d8b99d5ccc9ba86654a5bcfce11b29bb6
Feb 23 13:15:16.599578 master-0 kubenswrapper[26474]: I0223 13:15:16.598905 26474 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:15:16.736387 master-0 kubenswrapper[26474]: I0223 13:15:16.736151 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"]
Feb 23 13:15:16.737033 master-0 kubenswrapper[26474]: I0223 13:15:16.736990 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.741081 master-0 kubenswrapper[26474]: I0223 13:15:16.741007 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 23 13:15:16.741279 master-0 kubenswrapper[26474]: I0223 13:15:16.741231 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 23 13:15:16.741608 master-0 kubenswrapper[26474]: I0223 13:15:16.741561 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 23 13:15:16.741741 master-0 kubenswrapper[26474]: I0223 13:15:16.741648 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 23 13:15:16.742658 master-0 kubenswrapper[26474]: I0223 13:15:16.742577 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 23 13:15:16.743849 master-0 kubenswrapper[26474]: I0223 13:15:16.743789 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 23 13:15:16.747220 master-0 kubenswrapper[26474]: I0223 13:15:16.747154 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 23 13:15:16.747426 master-0 kubenswrapper[26474]: I0223 13:15:16.747400 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nlmwg"
Feb 23 13:15:16.747597 master-0 kubenswrapper[26474]: I0223 13:15:16.747545 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 23 13:15:16.747814 master-0 kubenswrapper[26474]: I0223 13:15:16.747768 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 23 13:15:16.747955 master-0 kubenswrapper[26474]: I0223 13:15:16.747866 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 23 13:15:16.748176 master-0 kubenswrapper[26474]: I0223 13:15:16.748122 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 23 13:15:16.768148 master-0 kubenswrapper[26474]: I0223 13:15:16.767288 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 23 13:15:16.768148 master-0 kubenswrapper[26474]: I0223 13:15:16.767798 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"]
Feb 23 13:15:16.770768 master-0 kubenswrapper[26474]: I0223 13:15:16.769438 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 23 13:15:16.895619 master-0 kubenswrapper[26474]: I0223 13:15:16.895512 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.895962 master-0 kubenswrapper[26474]: I0223 13:15:16.895727 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-audit-policies\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.895962 master-0 kubenswrapper[26474]: I0223 13:15:16.895862 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.895962 master-0 kubenswrapper[26474]: I0223 13:15:16.895891 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.895962 master-0 kubenswrapper[26474]: I0223 13:15:16.895928 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-error\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.895962 master-0 kubenswrapper[26474]: I0223 13:15:16.895948 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vgzh\" (UniqueName: \"kubernetes.io/projected/b7b6a893-4c46-4b86-b233-192e81966dec-kube-api-access-5vgzh\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.896189 master-0 kubenswrapper[26474]: I0223 13:15:16.896031 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.896189 master-0 kubenswrapper[26474]: I0223 13:15:16.896071 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.896189 master-0 kubenswrapper[26474]: I0223 13:15:16.896097 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.896189 master-0 kubenswrapper[26474]: I0223 13:15:16.896130 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-router-certs\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.896434 master-0 kubenswrapper[26474]: I0223 13:15:16.896302 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-service-ca\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.896434 master-0 kubenswrapper[26474]: I0223 13:15:16.896374 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7b6a893-4c46-4b86-b233-192e81966dec-audit-dir\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.896523 master-0 kubenswrapper[26474]: I0223 13:15:16.896506 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-login\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.998765 master-0 kubenswrapper[26474]: I0223 13:15:16.998543 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7b6a893-4c46-4b86-b233-192e81966dec-audit-dir\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.998765 master-0 kubenswrapper[26474]: I0223 13:15:16.998659 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-login\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.998765 master-0 kubenswrapper[26474]: I0223 13:15:16.998701 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.998765 master-0 kubenswrapper[26474]: I0223 13:15:16.998730 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-audit-policies\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.999224 master-0 kubenswrapper[26474]: I0223 13:15:16.998751 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7b6a893-4c46-4b86-b233-192e81966dec-audit-dir\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.999224 master-0 kubenswrapper[26474]: E0223 13:15:16.998835 26474 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Feb 23 13:15:16.999224 master-0 kubenswrapper[26474]: E0223 13:15:16.998914 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig podName:b7b6a893-4c46-4b86-b233-192e81966dec nodeName:}" failed. No retries permitted until 2026-02-23 13:15:17.498890425 +0000 UTC m=+39.345398102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig") pod "oauth-openshift-66d7b67b49-g8lgn" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec") : configmap "v4-0-config-system-cliconfig" not found
Feb 23 13:15:16.999224 master-0 kubenswrapper[26474]: I0223 13:15:16.999057 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:16.999425 master-0 kubenswrapper[26474]: I0223 13:15:16.999368 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.000147 master-0 kubenswrapper[26474]: I0223 13:15:16.999455 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-error\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.000147 master-0 kubenswrapper[26474]: I0223 13:15:16.999509 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vgzh\" (UniqueName: \"kubernetes.io/projected/b7b6a893-4c46-4b86-b233-192e81966dec-kube-api-access-5vgzh\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.000147 master-0 kubenswrapper[26474]: I0223 13:15:16.999578 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.000147 master-0 kubenswrapper[26474]: I0223 13:15:16.999630 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.000147 master-0 kubenswrapper[26474]: I0223 13:15:16.999694 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.000147 master-0 kubenswrapper[26474]: I0223 13:15:16.999747 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-router-certs\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.000147 master-0 kubenswrapper[26474]: I0223 13:15:16.999806 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-audit-policies\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.000147 master-0 kubenswrapper[26474]: I0223 13:15:17.000008 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-service-ca\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.001899 master-0 kubenswrapper[26474]: I0223 13:15:17.001283 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-service-ca\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.001899 master-0 kubenswrapper[26474]: E0223 13:15:17.001525 26474 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found
Feb 23 13:15:17.001899 master-0 kubenswrapper[26474]: E0223 13:15:17.001626 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session podName:b7b6a893-4c46-4b86-b233-192e81966dec nodeName:}" failed. No retries permitted until 2026-02-23 13:15:17.501586591 +0000 UTC m=+39.348094308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session") pod "oauth-openshift-66d7b67b49-g8lgn" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec") : secret "v4-0-config-system-session" not found
Feb 23 13:15:17.002441 master-0 kubenswrapper[26474]: I0223 13:15:17.002296 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-login\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.002856 master-0 kubenswrapper[26474]: I0223 13:15:17.002665 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.011008 master-0 kubenswrapper[26474]: I0223 13:15:17.008936 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.011008 master-0 kubenswrapper[26474]: I0223 13:15:17.009620 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.015053 master-0 kubenswrapper[26474]: I0223 13:15:17.014983 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.021435 master-0 kubenswrapper[26474]: I0223 13:15:17.019628 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-error\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.023308 master-0 kubenswrapper[26474]: I0223 13:15:17.023048 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vgzh\" (UniqueName: \"kubernetes.io/projected/b7b6a893-4c46-4b86-b233-192e81966dec-kube-api-access-5vgzh\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.023992 master-0 kubenswrapper[26474]: I0223 13:15:17.023925 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-router-certs\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.328724 master-0 kubenswrapper[26474]: I0223 13:15:17.328647 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp" event={"ID":"69b3a8a2-8b91-4115-9d4c-67fc97b36811","Type":"ContainerStarted","Data":"5652378209ba8fff869227584b33535d8b99d5ccc9ba86654a5bcfce11b29bb6"}
Feb 23 13:15:17.507948 master-0 kubenswrapper[26474]: I0223 13:15:17.507862 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.508274 master-0 kubenswrapper[26474]: I0223 13:15:17.507989 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:17.508274 master-0 kubenswrapper[26474]: E0223 13:15:17.508082 26474 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Feb 23 13:15:17.508274 master-0 kubenswrapper[26474]: E0223 13:15:17.508132 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig podName:b7b6a893-4c46-4b86-b233-192e81966dec nodeName:}" failed. No retries permitted until 2026-02-23 13:15:18.508117374 +0000 UTC m=+40.354625051 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig") pod "oauth-openshift-66d7b67b49-g8lgn" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec") : configmap "v4-0-config-system-cliconfig" not found
Feb 23 13:15:17.508598 master-0 kubenswrapper[26474]: E0223 13:15:17.508486 26474 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found
Feb 23 13:15:17.508598 master-0 kubenswrapper[26474]: E0223 13:15:17.508514 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session podName:b7b6a893-4c46-4b86-b233-192e81966dec nodeName:}" failed. No retries permitted until 2026-02-23 13:15:18.508506784 +0000 UTC m=+40.355014461 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session") pod "oauth-openshift-66d7b67b49-g8lgn" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec") : secret "v4-0-config-system-session" not found
Feb 23 13:15:18.524776 master-0 kubenswrapper[26474]: I0223 13:15:18.524720 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:18.525238 master-0 kubenswrapper[26474]: I0223 13:15:18.524784 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:18.525238 master-0 kubenswrapper[26474]: E0223 13:15:18.524870 26474 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Feb 23 13:15:18.525238 master-0 kubenswrapper[26474]: E0223 13:15:18.524951 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig podName:b7b6a893-4c46-4b86-b233-192e81966dec nodeName:}" failed. No retries permitted until 2026-02-23 13:15:20.524933469 +0000 UTC m=+42.371441146 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig") pod "oauth-openshift-66d7b67b49-g8lgn" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec") : configmap "v4-0-config-system-cliconfig" not found
Feb 23 13:15:18.539703 master-0 kubenswrapper[26474]: I0223 13:15:18.539634 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:19.352747 master-0 kubenswrapper[26474]: I0223 13:15:19.352684 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp" event={"ID":"69b3a8a2-8b91-4115-9d4c-67fc97b36811","Type":"ContainerStarted","Data":"5cd0962fdf0069e43015b2bd0f083f2517d0f5623df426e6944290c7c6b076f2"}
Feb 23 13:15:19.353648 master-0 kubenswrapper[26474]: I0223 13:15:19.353589 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:19.521142 master-0 kubenswrapper[26474]: I0223 13:15:19.520675 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp"
Feb 23 13:15:19.540684 master-0 kubenswrapper[26474]: I0223 13:15:19.540565 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-5df5ffc47c-vn4fp" podStartSLOduration=2.139340121 podStartE2EDuration="4.540531492s" podCreationTimestamp="2026-02-23 13:15:15 +0000 UTC" firstStartedPulling="2026-02-23 13:15:16.598772193 +0000 UTC m=+38.445279910" lastFinishedPulling="2026-02-23 13:15:18.999963604 +0000 UTC m=+40.846471281" observedRunningTime="2026-02-23 13:15:19.379791098 +0000 UTC m=+41.226298785" watchObservedRunningTime="2026-02-23 13:15:19.540531492 +0000 UTC m=+41.387039179"
Feb 23 13:15:19.715876 master-0 kubenswrapper[26474]: I0223 13:15:19.715719 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-955b69498-grk54"]
Feb 23 13:15:19.716859 master-0 kubenswrapper[26474]: I0223 13:15:19.716818 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-grk54"
Feb 23 13:15:19.721330 master-0 kubenswrapper[26474]: I0223 13:15:19.721278 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 23 13:15:19.721507 master-0 kubenswrapper[26474]: I0223 13:15:19.721389 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 23 13:15:19.721507 master-0 kubenswrapper[26474]: I0223 13:15:19.721396 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-z5rh8"
Feb 23 13:15:19.738725 master-0 kubenswrapper[26474]: I0223 13:15:19.738670 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-grk54"]
Feb 23 13:15:19.853526 master-0 kubenswrapper[26474]: I0223 13:15:19.853453 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxcgp\" (UniqueName: \"kubernetes.io/projected/449e8cbf-8db6-4709-b92f-a42410095ed2-kube-api-access-wxcgp\") pod \"downloads-955b69498-grk54\" (UID: \"449e8cbf-8db6-4709-b92f-a42410095ed2\") " pod="openshift-console/downloads-955b69498-grk54"
Feb 23 13:15:19.900482 master-0 kubenswrapper[26474]: I0223 13:15:19.900393 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk"]
Feb 23 13:15:19.903124 master-0 kubenswrapper[26474]: I0223 13:15:19.903023 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk"
Feb 23 13:15:19.904958 master-0 kubenswrapper[26474]: I0223 13:15:19.904905 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 23 13:15:19.905512 master-0 kubenswrapper[26474]: I0223 13:15:19.905486 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-4zf94"
Feb 23 13:15:19.910574 master-0 kubenswrapper[26474]: I0223 13:15:19.910517 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk"]
Feb 23 13:15:19.954421 master-0 kubenswrapper[26474]: I0223 13:15:19.954362 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxcgp\" (UniqueName: \"kubernetes.io/projected/449e8cbf-8db6-4709-b92f-a42410095ed2-kube-api-access-wxcgp\") pod \"downloads-955b69498-grk54\" (UID: \"449e8cbf-8db6-4709-b92f-a42410095ed2\") " pod="openshift-console/downloads-955b69498-grk54"
Feb 23 13:15:19.969715 master-0 kubenswrapper[26474]: I0223 13:15:19.969618 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxcgp\" (UniqueName: \"kubernetes.io/projected/449e8cbf-8db6-4709-b92f-a42410095ed2-kube-api-access-wxcgp\") pod \"downloads-955b69498-grk54\" (UID: \"449e8cbf-8db6-4709-b92f-a42410095ed2\") " pod="openshift-console/downloads-955b69498-grk54"
Feb 23 13:15:20.040595 master-0 kubenswrapper[26474]: I0223 13:15:20.040514 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-grk54"
Feb 23 13:15:20.055315 master-0 kubenswrapper[26474]: I0223 13:15:20.055253 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/675c2eae-ac56-4577-a599-884489d744af-monitoring-plugin-cert\") pod \"monitoring-plugin-68cc5f9c9b-5htrk\" (UID: \"675c2eae-ac56-4577-a599-884489d744af\") " pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk"
Feb 23 13:15:20.156574 master-0 kubenswrapper[26474]: I0223 13:15:20.156488 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/675c2eae-ac56-4577-a599-884489d744af-monitoring-plugin-cert\") pod \"monitoring-plugin-68cc5f9c9b-5htrk\" (UID: \"675c2eae-ac56-4577-a599-884489d744af\") " pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk"
Feb 23 13:15:20.163967 master-0 kubenswrapper[26474]: I0223 13:15:20.163909 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/675c2eae-ac56-4577-a599-884489d744af-monitoring-plugin-cert\") pod \"monitoring-plugin-68cc5f9c9b-5htrk\" (UID: \"675c2eae-ac56-4577-a599-884489d744af\") " pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk"
Feb 23 13:15:20.223927 master-0 kubenswrapper[26474]: I0223 13:15:20.223735 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk"
Feb 23 13:15:20.478679 master-0 kubenswrapper[26474]: I0223 13:15:20.478316 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-grk54"]
Feb 23 13:15:20.486120 master-0 kubenswrapper[26474]: W0223 13:15:20.486033 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod449e8cbf_8db6_4709_b92f_a42410095ed2.slice/crio-4143c04ee0e98f62094ac181049fb53f3bb03e67233fdb3b966153affd6e524d WatchSource:0}: Error finding container 4143c04ee0e98f62094ac181049fb53f3bb03e67233fdb3b966153affd6e524d: Status 404 returned error can't find the container with id 4143c04ee0e98f62094ac181049fb53f3bb03e67233fdb3b966153affd6e524d
Feb 23 13:15:20.569030 master-0 kubenswrapper[26474]: I0223 13:15:20.564120 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"
Feb 23 13:15:20.569030 master-0 kubenswrapper[26474]: E0223 13:15:20.564262 26474 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Feb 23 13:15:20.569030 master-0 kubenswrapper[26474]: E0223 13:15:20.564367 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig podName:b7b6a893-4c46-4b86-b233-192e81966dec nodeName:}" failed. No retries permitted until 2026-02-23 13:15:24.564331826 +0000 UTC m=+46.410839493 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig") pod "oauth-openshift-66d7b67b49-g8lgn" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec") : configmap "v4-0-config-system-cliconfig" not found
Feb 23 13:15:20.569030 master-0 kubenswrapper[26474]: I0223 13:15:20.564471 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 23 13:15:20.569030 master-0 kubenswrapper[26474]: E0223 13:15:20.564828 26474 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:15:20.569030 master-0 kubenswrapper[26474]: E0223 13:15:20.564847 26474 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:15:20.569030 master-0 kubenswrapper[26474]: E0223 13:15:20.564870 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access podName:27c1e327-cb40-4b36-b371-20d1271b8d8d nodeName:}" failed. No retries permitted until 2026-02-23 13:15:52.564862799 +0000 UTC m=+74.411370476 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access") pod "installer-3-master-0" (UID: "27c1e327-cb40-4b36-b371-20d1271b8d8d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 23 13:15:20.727459 master-0 kubenswrapper[26474]: I0223 13:15:20.727169 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk"]
Feb 23 13:15:20.740192 master-0 kubenswrapper[26474]: W0223 13:15:20.740125 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod675c2eae_ac56_4577_a599_884489d744af.slice/crio-17be26615c1e5c40c0ec4d76327fe21e955c765d40ecbe4e6a24ffea1a6ffa9e WatchSource:0}: Error finding container 17be26615c1e5c40c0ec4d76327fe21e955c765d40ecbe4e6a24ffea1a6ffa9e: Status 404 returned error can't find the container with id 17be26615c1e5c40c0ec4d76327fe21e955c765d40ecbe4e6a24ffea1a6ffa9e
Feb 23 13:15:21.376396 master-0 kubenswrapper[26474]: I0223 13:15:21.376267 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk" event={"ID":"675c2eae-ac56-4577-a599-884489d744af","Type":"ContainerStarted","Data":"17be26615c1e5c40c0ec4d76327fe21e955c765d40ecbe4e6a24ffea1a6ffa9e"}
Feb 23 13:15:21.378887 master-0 kubenswrapper[26474]: I0223 13:15:21.378840 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-grk54" event={"ID":"449e8cbf-8db6-4709-b92f-a42410095ed2","Type":"ContainerStarted","Data":"4143c04ee0e98f62094ac181049fb53f3bb03e67233fdb3b966153affd6e524d"}
Feb 23 13:15:22.605667 master-0 kubenswrapper[26474]: I0223 13:15:22.605593 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57dc5b68f6-vsffj"]
Feb 23 13:15:22.606452 master-0 kubenswrapper[26474]: I0223 13:15:22.606410 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.612852 master-0 kubenswrapper[26474]: I0223 13:15:22.612808 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 23 13:15:22.612951 master-0 kubenswrapper[26474]: I0223 13:15:22.612869 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8rwdc"
Feb 23 13:15:22.613071 master-0 kubenswrapper[26474]: I0223 13:15:22.613045 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 23 13:15:22.613109 master-0 kubenswrapper[26474]: I0223 13:15:22.613075 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 23 13:15:22.613147 master-0 kubenswrapper[26474]: I0223 13:15:22.612820 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 23 13:15:22.613260 master-0 kubenswrapper[26474]: I0223 13:15:22.613234 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 23 13:15:22.633375 master-0 kubenswrapper[26474]: I0223 13:15:22.629199 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57dc5b68f6-vsffj"]
Feb 23 13:15:22.733591 master-0 kubenswrapper[26474]: I0223 13:15:22.733528 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-oauth-serving-cert\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.733591 master-0 kubenswrapper[26474]: I0223 13:15:22.733591 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-serving-cert\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.733841 master-0 kubenswrapper[26474]: I0223 13:15:22.733615 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw8hq\" (UniqueName: \"kubernetes.io/projected/68704d04-761c-464f-873e-657fb05b35f5-kube-api-access-zw8hq\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.733841 master-0 kubenswrapper[26474]: I0223 13:15:22.733667 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-oauth-config\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.733841 master-0 kubenswrapper[26474]: I0223 13:15:22.733743 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-service-ca\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.733841 master-0 kubenswrapper[26474]: I0223 13:15:22.733769 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-console-config\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.760530 master-0 kubenswrapper[26474]: I0223 13:15:22.757747 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:15:22.765982 master-0 kubenswrapper[26474]: I0223 13:15:22.765920 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:15:22.835791 master-0 kubenswrapper[26474]: I0223 13:15:22.834572 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-service-ca\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.835791 master-0 kubenswrapper[26474]: I0223 13:15:22.834624 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-console-config\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.835791 master-0 kubenswrapper[26474]: I0223 13:15:22.834696 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-oauth-serving-cert\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.835791 master-0 kubenswrapper[26474]: I0223 13:15:22.834713 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-serving-cert\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.835791 master-0 kubenswrapper[26474]: I0223 13:15:22.834728 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw8hq\" (UniqueName: \"kubernetes.io/projected/68704d04-761c-464f-873e-657fb05b35f5-kube-api-access-zw8hq\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.835791 master-0 kubenswrapper[26474]: I0223 13:15:22.834762 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-oauth-config\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.836314 master-0 kubenswrapper[26474]: I0223 13:15:22.835966 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-service-ca\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.837421 master-0 kubenswrapper[26474]: I0223 13:15:22.836516 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-console-config\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.837717 master-0 kubenswrapper[26474]: I0223 13:15:22.837554 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-oauth-serving-cert\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.841608 master-0 kubenswrapper[26474]: I0223 13:15:22.841572 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-serving-cert\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.863928 master-0 kubenswrapper[26474]: I0223 13:15:22.863787 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-oauth-config\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.870219 master-0 kubenswrapper[26474]: I0223 13:15:22.869324 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw8hq\" (UniqueName: \"kubernetes.io/projected/68704d04-761c-464f-873e-657fb05b35f5-kube-api-access-zw8hq\") pod \"console-57dc5b68f6-vsffj\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") " pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:22.971059 master-0 kubenswrapper[26474]: I0223 13:15:22.970960 26474 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-57dc5b68f6-vsffj" Feb 23 13:15:23.403973 master-0 kubenswrapper[26474]: I0223 13:15:23.403635 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk" event={"ID":"675c2eae-ac56-4577-a599-884489d744af","Type":"ContainerStarted","Data":"c05da612f0d462bbc5062e5bbc7bb60e23d32e957f0649ca139a29302cab823a"} Feb 23 13:15:23.429703 master-0 kubenswrapper[26474]: I0223 13:15:23.428843 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57dc5b68f6-vsffj"] Feb 23 13:15:23.433324 master-0 kubenswrapper[26474]: I0223 13:15:23.433220 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk" podStartSLOduration=2.5167204119999997 podStartE2EDuration="4.433201381s" podCreationTimestamp="2026-02-23 13:15:19 +0000 UTC" firstStartedPulling="2026-02-23 13:15:20.742998659 +0000 UTC m=+42.589506326" lastFinishedPulling="2026-02-23 13:15:22.659479618 +0000 UTC m=+44.505987295" observedRunningTime="2026-02-23 13:15:23.428594849 +0000 UTC m=+45.275102526" watchObservedRunningTime="2026-02-23 13:15:23.433201381 +0000 UTC m=+45.279709058" Feb 23 13:15:24.412501 master-0 kubenswrapper[26474]: I0223 13:15:24.412292 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57dc5b68f6-vsffj" event={"ID":"68704d04-761c-464f-873e-657fb05b35f5","Type":"ContainerStarted","Data":"960409c6996094dcc4442cd7371eae66678c819a87eb052b88d899ad667d6894"} Feb 23 13:15:24.412501 master-0 kubenswrapper[26474]: I0223 13:15:24.412443 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk" Feb 23 13:15:24.423155 master-0 kubenswrapper[26474]: I0223 13:15:24.423086 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/monitoring-plugin-68cc5f9c9b-5htrk" Feb 23 13:15:24.571447 master-0 kubenswrapper[26474]: I0223 13:15:24.570738 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" Feb 23 13:15:24.571675 master-0 kubenswrapper[26474]: I0223 13:15:24.571402 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66d7b67b49-g8lgn\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" Feb 23 13:15:24.865277 master-0 kubenswrapper[26474]: I0223 13:15:24.864696 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" Feb 23 13:15:25.060117 master-0 kubenswrapper[26474]: I0223 13:15:25.060030 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-768998fb98-dpqwp"] Feb 23 13:15:25.062443 master-0 kubenswrapper[26474]: I0223 13:15:25.062412 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-768998fb98-dpqwp"] Feb 23 13:15:25.062600 master-0 kubenswrapper[26474]: I0223 13:15:25.062572 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.076317 master-0 kubenswrapper[26474]: I0223 13:15:25.076230 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 13:15:25.187191 master-0 kubenswrapper[26474]: I0223 13:15:25.187120 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-serving-cert\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.187191 master-0 kubenswrapper[26474]: I0223 13:15:25.187205 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-oauth-config\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.187191 master-0 kubenswrapper[26474]: I0223 13:15:25.187232 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-trusted-ca-bundle\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.187191 master-0 kubenswrapper[26474]: I0223 13:15:25.187294 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-oauth-serving-cert\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 
13:15:25.187847 master-0 kubenswrapper[26474]: I0223 13:15:25.187323 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csjl7\" (UniqueName: \"kubernetes.io/projected/0b62106f-cfec-45a7-bec6-7a87612c1eb7-kube-api-access-csjl7\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.187847 master-0 kubenswrapper[26474]: I0223 13:15:25.187371 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-config\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.187847 master-0 kubenswrapper[26474]: I0223 13:15:25.187400 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-service-ca\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.289845 master-0 kubenswrapper[26474]: I0223 13:15:25.289759 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-serving-cert\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.289845 master-0 kubenswrapper[26474]: I0223 13:15:25.289831 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-oauth-config\") pod \"console-768998fb98-dpqwp\" (UID: 
\"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.289845 master-0 kubenswrapper[26474]: I0223 13:15:25.289854 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-trusted-ca-bundle\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.290108 master-0 kubenswrapper[26474]: I0223 13:15:25.289950 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-oauth-serving-cert\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.290442 master-0 kubenswrapper[26474]: I0223 13:15:25.290397 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csjl7\" (UniqueName: \"kubernetes.io/projected/0b62106f-cfec-45a7-bec6-7a87612c1eb7-kube-api-access-csjl7\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.290791 master-0 kubenswrapper[26474]: I0223 13:15:25.290670 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-config\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.290791 master-0 kubenswrapper[26474]: I0223 13:15:25.290770 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-service-ca\") pod 
\"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.291517 master-0 kubenswrapper[26474]: I0223 13:15:25.291377 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-config\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.291737 master-0 kubenswrapper[26474]: I0223 13:15:25.291596 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-service-ca\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.292205 master-0 kubenswrapper[26474]: I0223 13:15:25.292170 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-trusted-ca-bundle\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.293329 master-0 kubenswrapper[26474]: I0223 13:15:25.293292 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-oauth-serving-cert\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.294324 master-0 kubenswrapper[26474]: I0223 13:15:25.294290 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-oauth-config\") pod 
\"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.295909 master-0 kubenswrapper[26474]: I0223 13:15:25.295850 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-serving-cert\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.309607 master-0 kubenswrapper[26474]: I0223 13:15:25.309486 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csjl7\" (UniqueName: \"kubernetes.io/projected/0b62106f-cfec-45a7-bec6-7a87612c1eb7-kube-api-access-csjl7\") pod \"console-768998fb98-dpqwp\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.338155 master-0 kubenswrapper[26474]: I0223 13:15:25.338080 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"] Feb 23 13:15:25.365464 master-0 kubenswrapper[26474]: W0223 13:15:25.365384 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7b6a893_4c46_4b86_b233_192e81966dec.slice/crio-40e1b0457c82a8aa4a23862f510a8497ecff01a5c692ebcfa99a917a4688296b WatchSource:0}: Error finding container 40e1b0457c82a8aa4a23862f510a8497ecff01a5c692ebcfa99a917a4688296b: Status 404 returned error can't find the container with id 40e1b0457c82a8aa4a23862f510a8497ecff01a5c692ebcfa99a917a4688296b Feb 23 13:15:25.406747 master-0 kubenswrapper[26474]: I0223 13:15:25.406666 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:15:25.424455 master-0 kubenswrapper[26474]: I0223 13:15:25.424399 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" event={"ID":"b7b6a893-4c46-4b86-b233-192e81966dec","Type":"ContainerStarted","Data":"40e1b0457c82a8aa4a23862f510a8497ecff01a5c692ebcfa99a917a4688296b"} Feb 23 13:15:26.046723 master-0 kubenswrapper[26474]: I0223 13:15:26.046635 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-768998fb98-dpqwp"] Feb 23 13:15:26.058804 master-0 kubenswrapper[26474]: W0223 13:15:26.058756 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b62106f_cfec_45a7_bec6_7a87612c1eb7.slice/crio-ac3c5d7a718c989abecd0d15c339df339a3c0be740a207d9473f7a5dee7ae4ef WatchSource:0}: Error finding container ac3c5d7a718c989abecd0d15c339df339a3c0be740a207d9473f7a5dee7ae4ef: Status 404 returned error can't find the container with id ac3c5d7a718c989abecd0d15c339df339a3c0be740a207d9473f7a5dee7ae4ef Feb 23 13:15:26.457943 master-0 kubenswrapper[26474]: I0223 13:15:26.457717 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-768998fb98-dpqwp" event={"ID":"0b62106f-cfec-45a7-bec6-7a87612c1eb7","Type":"ContainerStarted","Data":"ac3c5d7a718c989abecd0d15c339df339a3c0be740a207d9473f7a5dee7ae4ef"} Feb 23 13:15:29.496986 master-0 kubenswrapper[26474]: I0223 13:15:29.496808 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" event={"ID":"b7b6a893-4c46-4b86-b233-192e81966dec","Type":"ContainerStarted","Data":"145d49f79b330a4c807885a2800e457a51c06f59224fa3faba8d3d4d6f16dbfd"} Feb 23 13:15:29.496986 master-0 kubenswrapper[26474]: I0223 13:15:29.496905 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" Feb 23 13:15:29.499482 master-0 kubenswrapper[26474]: I0223 13:15:29.499422 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57dc5b68f6-vsffj" event={"ID":"68704d04-761c-464f-873e-657fb05b35f5","Type":"ContainerStarted","Data":"cb938b3ef459343cfeab6a345c4ee9cf39c212fc4c950b4bb5611411963d6730"} Feb 23 13:15:29.504720 master-0 kubenswrapper[26474]: I0223 13:15:29.502601 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-768998fb98-dpqwp" event={"ID":"0b62106f-cfec-45a7-bec6-7a87612c1eb7","Type":"ContainerStarted","Data":"860ec94f783c3b653a3048dbdbe8687055c34d3047415d2575f5257d4a2f1cc0"} Feb 23 13:15:29.505613 master-0 kubenswrapper[26474]: I0223 13:15:29.505572 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" Feb 23 13:15:29.524278 master-0 kubenswrapper[26474]: I0223 13:15:29.524196 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" podStartSLOduration=9.980129689 podStartE2EDuration="13.524175147s" podCreationTimestamp="2026-02-23 13:15:16 +0000 UTC" firstStartedPulling="2026-02-23 13:15:25.370423799 +0000 UTC m=+47.216931476" lastFinishedPulling="2026-02-23 13:15:28.914469257 +0000 UTC m=+50.760976934" observedRunningTime="2026-02-23 13:15:29.522388694 +0000 UTC m=+51.368896381" watchObservedRunningTime="2026-02-23 13:15:29.524175147 +0000 UTC m=+51.370682824" Feb 23 13:15:29.580801 master-0 kubenswrapper[26474]: I0223 13:15:29.580679 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-768998fb98-dpqwp" podStartSLOduration=1.682556221 podStartE2EDuration="4.58065371s" podCreationTimestamp="2026-02-23 13:15:25 +0000 UTC" firstStartedPulling="2026-02-23 13:15:26.064655189 +0000 UTC m=+47.911162866" 
lastFinishedPulling="2026-02-23 13:15:28.962752678 +0000 UTC m=+50.809260355" observedRunningTime="2026-02-23 13:15:29.577476132 +0000 UTC m=+51.423983819" watchObservedRunningTime="2026-02-23 13:15:29.58065371 +0000 UTC m=+51.427161387" Feb 23 13:15:29.601450 master-0 kubenswrapper[26474]: I0223 13:15:29.601378 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57dc5b68f6-vsffj" podStartSLOduration=2.122384418 podStartE2EDuration="7.601354277s" podCreationTimestamp="2026-02-23 13:15:22 +0000 UTC" firstStartedPulling="2026-02-23 13:15:23.433318744 +0000 UTC m=+45.279826421" lastFinishedPulling="2026-02-23 13:15:28.912288603 +0000 UTC m=+50.758796280" observedRunningTime="2026-02-23 13:15:29.59904434 +0000 UTC m=+51.445552027" watchObservedRunningTime="2026-02-23 13:15:29.601354277 +0000 UTC m=+51.447861954" Feb 23 13:15:31.732614 master-0 kubenswrapper[26474]: I0223 13:15:31.731205 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69f44bb786-4zj6n"] Feb 23 13:15:31.732614 master-0 kubenswrapper[26474]: I0223 13:15:31.731717 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager" containerID="cri-o://05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247" gracePeriod=30 Feb 23 13:15:31.740582 master-0 kubenswrapper[26474]: I0223 13:15:31.738106 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"] Feb 23 13:15:31.740582 master-0 kubenswrapper[26474]: I0223 13:15:31.738333 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" podUID="f47fa225-93fd-458b-b450-a0411e629afd" 
containerName="route-controller-manager" containerID="cri-o://4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca" gracePeriod=30 Feb 23 13:15:32.004635 master-0 kubenswrapper[26474]: E0223 13:15:32.004485 26474 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c61886_6cc7_44aa_b56a_81cdcc670993.slice/crio-conmon-05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:15:32.260950 master-0 kubenswrapper[26474]: I0223 13:15:32.260805 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" Feb 23 13:15:32.270329 master-0 kubenswrapper[26474]: I0223 13:15:32.270276 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" Feb 23 13:15:32.277434 master-0 kubenswrapper[26474]: I0223 13:15:32.277407 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mkd2\" (UniqueName: \"kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2\") pod \"f47fa225-93fd-458b-b450-a0411e629afd\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " Feb 23 13:15:32.277511 master-0 kubenswrapper[26474]: I0223 13:15:32.277488 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles\") pod \"d7c61886-6cc7-44aa-b56a-81cdcc670993\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " Feb 23 13:15:32.277586 master-0 kubenswrapper[26474]: I0223 13:15:32.277568 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config\") pod \"d7c61886-6cc7-44aa-b56a-81cdcc670993\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " Feb 23 13:15:32.277620 master-0 kubenswrapper[26474]: I0223 13:15:32.277603 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca\") pod \"f47fa225-93fd-458b-b450-a0411e629afd\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " Feb 23 13:15:32.277760 master-0 kubenswrapper[26474]: I0223 13:15:32.277744 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq2rn\" (UniqueName: \"kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn\") pod \"d7c61886-6cc7-44aa-b56a-81cdcc670993\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " Feb 23 13:15:32.277804 master-0 kubenswrapper[26474]: I0223 13:15:32.277797 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert\") pod \"f47fa225-93fd-458b-b450-a0411e629afd\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " Feb 23 13:15:32.277837 master-0 kubenswrapper[26474]: I0223 13:15:32.277829 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert\") pod \"d7c61886-6cc7-44aa-b56a-81cdcc670993\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " Feb 23 13:15:32.277869 master-0 kubenswrapper[26474]: I0223 13:15:32.277848 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config\") pod \"f47fa225-93fd-458b-b450-a0411e629afd\" (UID: \"f47fa225-93fd-458b-b450-a0411e629afd\") " Feb 23 
13:15:32.277917 master-0 kubenswrapper[26474]: I0223 13:15:32.277902 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca\") pod \"d7c61886-6cc7-44aa-b56a-81cdcc670993\" (UID: \"d7c61886-6cc7-44aa-b56a-81cdcc670993\") " Feb 23 13:15:32.278715 master-0 kubenswrapper[26474]: I0223 13:15:32.278663 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7c61886-6cc7-44aa-b56a-81cdcc670993" (UID: "d7c61886-6cc7-44aa-b56a-81cdcc670993"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:15:32.279136 master-0 kubenswrapper[26474]: I0223 13:15:32.279062 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d7c61886-6cc7-44aa-b56a-81cdcc670993" (UID: "d7c61886-6cc7-44aa-b56a-81cdcc670993"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:15:32.279591 master-0 kubenswrapper[26474]: I0223 13:15:32.279512 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca" (OuterVolumeSpecName: "client-ca") pod "f47fa225-93fd-458b-b450-a0411e629afd" (UID: "f47fa225-93fd-458b-b450-a0411e629afd"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:15:32.279838 master-0 kubenswrapper[26474]: I0223 13:15:32.279804 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config" (OuterVolumeSpecName: "config") pod "d7c61886-6cc7-44aa-b56a-81cdcc670993" (UID: "d7c61886-6cc7-44aa-b56a-81cdcc670993"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:15:32.279931 master-0 kubenswrapper[26474]: I0223 13:15:32.279894 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config" (OuterVolumeSpecName: "config") pod "f47fa225-93fd-458b-b450-a0411e629afd" (UID: "f47fa225-93fd-458b-b450-a0411e629afd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:15:32.282564 master-0 kubenswrapper[26474]: I0223 13:15:32.282464 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2" (OuterVolumeSpecName: "kube-api-access-4mkd2") pod "f47fa225-93fd-458b-b450-a0411e629afd" (UID: "f47fa225-93fd-458b-b450-a0411e629afd"). InnerVolumeSpecName "kube-api-access-4mkd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:15:32.282564 master-0 kubenswrapper[26474]: I0223 13:15:32.282544 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn" (OuterVolumeSpecName: "kube-api-access-mq2rn") pod "d7c61886-6cc7-44aa-b56a-81cdcc670993" (UID: "d7c61886-6cc7-44aa-b56a-81cdcc670993"). InnerVolumeSpecName "kube-api-access-mq2rn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:15:32.283665 master-0 kubenswrapper[26474]: I0223 13:15:32.283568 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7c61886-6cc7-44aa-b56a-81cdcc670993" (UID: "d7c61886-6cc7-44aa-b56a-81cdcc670993"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:15:32.284248 master-0 kubenswrapper[26474]: I0223 13:15:32.284190 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f47fa225-93fd-458b-b450-a0411e629afd" (UID: "f47fa225-93fd-458b-b450-a0411e629afd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:15:32.379391 master-0 kubenswrapper[26474]: I0223 13:15:32.379307 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq2rn\" (UniqueName: \"kubernetes.io/projected/d7c61886-6cc7-44aa-b56a-81cdcc670993-kube-api-access-mq2rn\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.379391 master-0 kubenswrapper[26474]: I0223 13:15:32.379377 26474 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47fa225-93fd-458b-b450-a0411e629afd-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.379391 master-0 kubenswrapper[26474]: I0223 13:15:32.379388 26474 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7c61886-6cc7-44aa-b56a-81cdcc670993-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.379391 master-0 kubenswrapper[26474]: I0223 13:15:32.379401 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.379391 master-0 kubenswrapper[26474]: I0223 13:15:32.379411 26474 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.379715 master-0 kubenswrapper[26474]: I0223 13:15:32.379421 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mkd2\" (UniqueName: \"kubernetes.io/projected/f47fa225-93fd-458b-b450-a0411e629afd-kube-api-access-4mkd2\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.379715 master-0 kubenswrapper[26474]: I0223 13:15:32.379431 26474 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.379715 master-0 kubenswrapper[26474]: I0223 13:15:32.379443 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7c61886-6cc7-44aa-b56a-81cdcc670993-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.379715 master-0 kubenswrapper[26474]: I0223 13:15:32.379452 26474 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f47fa225-93fd-458b-b450-a0411e629afd-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:15:32.530233 master-0 kubenswrapper[26474]: I0223 13:15:32.530109 26474 generic.go:334] "Generic (PLEG): container finished" podID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerID="05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247" exitCode=0
Feb 23 13:15:32.530233 master-0 kubenswrapper[26474]: I0223 13:15:32.530191 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" event={"ID":"d7c61886-6cc7-44aa-b56a-81cdcc670993","Type":"ContainerDied","Data":"05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247"}
Feb 23 13:15:32.530233 master-0 kubenswrapper[26474]: I0223 13:15:32.530220 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n" event={"ID":"d7c61886-6cc7-44aa-b56a-81cdcc670993","Type":"ContainerDied","Data":"b933426682f905b163cdeceb81784d840d9932bd08aab494209ff2aa752893c3"}
Feb 23 13:15:32.530233 master-0 kubenswrapper[26474]: I0223 13:15:32.530238 26474 scope.go:117] "RemoveContainer" containerID="05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247"
Feb 23 13:15:32.530573 master-0 kubenswrapper[26474]: I0223 13:15:32.530317 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f44bb786-4zj6n"
Feb 23 13:15:32.536306 master-0 kubenswrapper[26474]: I0223 13:15:32.536248 26474 generic.go:334] "Generic (PLEG): container finished" podID="f47fa225-93fd-458b-b450-a0411e629afd" containerID="4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca" exitCode=0
Feb 23 13:15:32.536306 master-0 kubenswrapper[26474]: I0223 13:15:32.536286 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" event={"ID":"f47fa225-93fd-458b-b450-a0411e629afd","Type":"ContainerDied","Data":"4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca"}
Feb 23 13:15:32.536415 master-0 kubenswrapper[26474]: I0223 13:15:32.536313 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"
Feb 23 13:15:32.536633 master-0 kubenswrapper[26474]: I0223 13:15:32.536319 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl" event={"ID":"f47fa225-93fd-458b-b450-a0411e629afd","Type":"ContainerDied","Data":"655e1b023cf7c57ce36ad89fb7b5ee982e50f51224c428832341448b6acdee46"}
Feb 23 13:15:32.559135 master-0 kubenswrapper[26474]: I0223 13:15:32.558058 26474 scope.go:117] "RemoveContainer" containerID="31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9"
Feb 23 13:15:32.559245 master-0 kubenswrapper[26474]: I0223 13:15:32.559213 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69f44bb786-4zj6n"]
Feb 23 13:15:32.562599 master-0 kubenswrapper[26474]: I0223 13:15:32.562555 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69f44bb786-4zj6n"]
Feb 23 13:15:32.571803 master-0 kubenswrapper[26474]: I0223 13:15:32.571742 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"]
Feb 23 13:15:32.578005 master-0 kubenswrapper[26474]: I0223 13:15:32.577959 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648db577cf-2sqzl"]
Feb 23 13:15:32.604848 master-0 kubenswrapper[26474]: I0223 13:15:32.604794 26474 scope.go:117] "RemoveContainer" containerID="05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247"
Feb 23 13:15:32.605563 master-0 kubenswrapper[26474]: E0223 13:15:32.605503 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247\": container with ID starting with 05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247 not found: ID does not exist" containerID="05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247"
Feb 23 13:15:32.605672 master-0 kubenswrapper[26474]: I0223 13:15:32.605584 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247"} err="failed to get container status \"05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247\": rpc error: code = NotFound desc = could not find container \"05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247\": container with ID starting with 05a6ba24fd9905ca9d6db075649ebd7629508c24586e1a68a27a908e44c12247 not found: ID does not exist"
Feb 23 13:15:32.605724 master-0 kubenswrapper[26474]: I0223 13:15:32.605673 26474 scope.go:117] "RemoveContainer" containerID="31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9"
Feb 23 13:15:32.606460 master-0 kubenswrapper[26474]: E0223 13:15:32.606412 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9\": container with ID starting with 31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9 not found: ID does not exist" containerID="31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9"
Feb 23 13:15:32.606539 master-0 kubenswrapper[26474]: I0223 13:15:32.606469 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9"} err="failed to get container status \"31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9\": rpc error: code = NotFound desc = could not find container \"31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9\": container with ID starting with 31fef742f44ca1de040f3b3e56b9d4d5bddec605bc755ea2e75dc9460dea29c9 not found: ID does not exist"
Feb 23 13:15:32.606539 master-0 kubenswrapper[26474]: I0223 13:15:32.606509 26474 scope.go:117] "RemoveContainer" containerID="4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca"
Feb 23 13:15:32.628570 master-0 kubenswrapper[26474]: I0223 13:15:32.628528 26474 scope.go:117] "RemoveContainer" containerID="5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b"
Feb 23 13:15:32.652760 master-0 kubenswrapper[26474]: I0223 13:15:32.649113 26474 scope.go:117] "RemoveContainer" containerID="4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca"
Feb 23 13:15:32.652760 master-0 kubenswrapper[26474]: E0223 13:15:32.649856 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca\": container with ID starting with 4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca not found: ID does not exist" containerID="4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca"
Feb 23 13:15:32.652760 master-0 kubenswrapper[26474]: I0223 13:15:32.649915 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca"} err="failed to get container status \"4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca\": rpc error: code = NotFound desc = could not find container \"4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca\": container with ID starting with 4ee152ac1c753c1aaa082e8d2ea1db0e93c566653cd980833ae8eb510814cfca not found: ID does not exist"
Feb 23 13:15:32.652760 master-0 kubenswrapper[26474]: I0223 13:15:32.649958 26474 scope.go:117] "RemoveContainer" containerID="5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b"
Feb 23 13:15:32.652760 master-0 kubenswrapper[26474]: E0223 13:15:32.650459 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b\": container with ID starting with 5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b not found: ID does not exist" containerID="5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b"
Feb 23 13:15:32.652760 master-0 kubenswrapper[26474]: I0223 13:15:32.650488 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b"} err="failed to get container status \"5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b\": rpc error: code = NotFound desc = could not find container \"5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b\": container with ID starting with 5205463618a71c75e31891d8b02b446bd187bc709b22b766034d6f94fbb6b94b not found: ID does not exist"
Feb 23 13:15:32.973466 master-0 kubenswrapper[26474]: I0223 13:15:32.973384 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:32.973466 master-0 kubenswrapper[26474]: I0223 13:15:32.973431 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:15:32.973466 master-0 kubenswrapper[26474]: I0223 13:15:32.973425 26474 patch_prober.go:28] interesting pod/console-57dc5b68f6-vsffj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body=
Feb 23 13:15:32.973466 master-0 kubenswrapper[26474]: I0223 13:15:32.973472 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-57dc5b68f6-vsffj" podUID="68704d04-761c-464f-873e-657fb05b35f5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused"
Feb 23 13:15:33.730987 master-0 kubenswrapper[26474]: I0223 13:15:33.730275 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"]
Feb 23 13:15:33.737004 master-0 kubenswrapper[26474]: E0223 13:15:33.736770 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47fa225-93fd-458b-b450-a0411e629afd" containerName="route-controller-manager"
Feb 23 13:15:33.737004 master-0 kubenswrapper[26474]: I0223 13:15:33.737008 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47fa225-93fd-458b-b450-a0411e629afd" containerName="route-controller-manager"
Feb 23 13:15:33.737364 master-0 kubenswrapper[26474]: E0223 13:15:33.737035 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager"
Feb 23 13:15:33.737364 master-0 kubenswrapper[26474]: I0223 13:15:33.737052 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager"
Feb 23 13:15:33.737364 master-0 kubenswrapper[26474]: E0223 13:15:33.737064 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47fa225-93fd-458b-b450-a0411e629afd" containerName="route-controller-manager"
Feb 23 13:15:33.737364 master-0 kubenswrapper[26474]: I0223 13:15:33.737073 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47fa225-93fd-458b-b450-a0411e629afd" containerName="route-controller-manager"
Feb 23 13:15:33.737364 master-0 kubenswrapper[26474]: E0223 13:15:33.737139 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager"
Feb 23 13:15:33.737364 master-0 kubenswrapper[26474]: I0223 13:15:33.737155 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager"
Feb 23 13:15:33.738455 master-0 kubenswrapper[26474]: I0223 13:15:33.738419 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager"
Feb 23 13:15:33.738526 master-0 kubenswrapper[26474]: I0223 13:15:33.738475 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" containerName="controller-manager"
Feb 23 13:15:33.738526 master-0 kubenswrapper[26474]: I0223 13:15:33.738507 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47fa225-93fd-458b-b450-a0411e629afd" containerName="route-controller-manager"
Feb 23 13:15:33.748094 master-0 kubenswrapper[26474]: I0223 13:15:33.746665 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58859ff46b-2b8pf"]
Feb 23 13:15:33.748094 master-0 kubenswrapper[26474]: I0223 13:15:33.746856 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.750498 master-0 kubenswrapper[26474]: I0223 13:15:33.749673 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47fa225-93fd-458b-b450-a0411e629afd" containerName="route-controller-manager"
Feb 23 13:15:33.751069 master-0 kubenswrapper[26474]: I0223 13:15:33.751029 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 13:15:33.751264 master-0 kubenswrapper[26474]: I0223 13:15:33.751222 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:15:33.751525 master-0 kubenswrapper[26474]: I0223 13:15:33.751493 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 13:15:33.751623 master-0 kubenswrapper[26474]: I0223 13:15:33.751591 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9ppv8"
Feb 23 13:15:33.752232 master-0 kubenswrapper[26474]: I0223 13:15:33.752194 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:15:33.752512 master-0 kubenswrapper[26474]: I0223 13:15:33.752477 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 13:15:33.753014 master-0 kubenswrapper[26474]: I0223 13:15:33.752964 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.758820 master-0 kubenswrapper[26474]: I0223 13:15:33.755892 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:15:33.758820 master-0 kubenswrapper[26474]: I0223 13:15:33.756049 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:15:33.758820 master-0 kubenswrapper[26474]: I0223 13:15:33.756619 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:15:33.758820 master-0 kubenswrapper[26474]: I0223 13:15:33.756771 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:15:33.758820 master-0 kubenswrapper[26474]: I0223 13:15:33.758303 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-klhdv"
Feb 23 13:15:33.759358 master-0 kubenswrapper[26474]: I0223 13:15:33.759289 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:15:33.766861 master-0 kubenswrapper[26474]: I0223 13:15:33.766785 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"]
Feb 23 13:15:33.773996 master-0 kubenswrapper[26474]: I0223 13:15:33.773952 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:15:33.781649 master-0 kubenswrapper[26474]: I0223 13:15:33.781591 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58859ff46b-2b8pf"]
Feb 23 13:15:33.804825 master-0 kubenswrapper[26474]: I0223 13:15:33.804758 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-config\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.805008 master-0 kubenswrapper[26474]: I0223 13:15:33.804833 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0027bc95-9535-4f53-ada1-c24e56d8c0ca-client-ca\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.805008 master-0 kubenswrapper[26474]: I0223 13:15:33.804861 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82d8j\" (UniqueName: \"kubernetes.io/projected/0027bc95-9535-4f53-ada1-c24e56d8c0ca-kube-api-access-82d8j\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.805008 master-0 kubenswrapper[26474]: I0223 13:15:33.804884 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-serving-cert\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.805008 master-0 kubenswrapper[26474]: I0223 13:15:33.804903 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txsdn\" (UniqueName: \"kubernetes.io/projected/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-kube-api-access-txsdn\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.805008 master-0 kubenswrapper[26474]: I0223 13:15:33.804923 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0027bc95-9535-4f53-ada1-c24e56d8c0ca-serving-cert\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.805008 master-0 kubenswrapper[26474]: I0223 13:15:33.804948 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-proxy-ca-bundles\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.805008 master-0 kubenswrapper[26474]: I0223 13:15:33.804989 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0027bc95-9535-4f53-ada1-c24e56d8c0ca-config\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.805214 master-0 kubenswrapper[26474]: I0223 13:15:33.805022 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-client-ca\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.906487 master-0 kubenswrapper[26474]: I0223 13:15:33.906405 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-proxy-ca-bundles\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.906696 master-0 kubenswrapper[26474]: I0223 13:15:33.906504 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0027bc95-9535-4f53-ada1-c24e56d8c0ca-config\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.906696 master-0 kubenswrapper[26474]: I0223 13:15:33.906566 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-client-ca\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.906696 master-0 kubenswrapper[26474]: I0223 13:15:33.906591 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-config\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.906696 master-0 kubenswrapper[26474]: I0223 13:15:33.906646 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0027bc95-9535-4f53-ada1-c24e56d8c0ca-client-ca\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.906696 master-0 kubenswrapper[26474]: I0223 13:15:33.906681 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82d8j\" (UniqueName: \"kubernetes.io/projected/0027bc95-9535-4f53-ada1-c24e56d8c0ca-kube-api-access-82d8j\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.906907 master-0 kubenswrapper[26474]: I0223 13:15:33.906714 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-serving-cert\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.906907 master-0 kubenswrapper[26474]: I0223 13:15:33.906749 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txsdn\" (UniqueName: \"kubernetes.io/projected/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-kube-api-access-txsdn\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.906907 master-0 kubenswrapper[26474]: I0223 13:15:33.906777 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0027bc95-9535-4f53-ada1-c24e56d8c0ca-serving-cert\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.909223 master-0 kubenswrapper[26474]: I0223 13:15:33.908083 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0027bc95-9535-4f53-ada1-c24e56d8c0ca-client-ca\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.909223 master-0 kubenswrapper[26474]: I0223 13:15:33.908602 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-proxy-ca-bundles\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.909223 master-0 kubenswrapper[26474]: I0223 13:15:33.908962 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-client-ca\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.909936 master-0 kubenswrapper[26474]: I0223 13:15:33.909887 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0027bc95-9535-4f53-ada1-c24e56d8c0ca-config\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.910695 master-0 kubenswrapper[26474]: I0223 13:15:33.910605 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-config\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.910950 master-0 kubenswrapper[26474]: I0223 13:15:33.910898 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0027bc95-9535-4f53-ada1-c24e56d8c0ca-serving-cert\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:33.911989 master-0 kubenswrapper[26474]: I0223 13:15:33.911937 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-serving-cert\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.931041 master-0 kubenswrapper[26474]: I0223 13:15:33.929375 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txsdn\" (UniqueName: \"kubernetes.io/projected/5355312e-25b4-4ad2-8eb7-7c6289f3c46b-kube-api-access-txsdn\") pod \"controller-manager-58859ff46b-2b8pf\" (UID: \"5355312e-25b4-4ad2-8eb7-7c6289f3c46b\") " pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:33.931041 master-0 kubenswrapper[26474]: I0223 13:15:33.930417 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82d8j\" (UniqueName: \"kubernetes.io/projected/0027bc95-9535-4f53-ada1-c24e56d8c0ca-kube-api-access-82d8j\") pod \"route-controller-manager-fbc79d786-bnq99\" (UID: \"0027bc95-9535-4f53-ada1-c24e56d8c0ca\") " pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:34.086135 master-0 kubenswrapper[26474]: I0223 13:15:34.086054 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:34.103460 master-0 kubenswrapper[26474]: I0223 13:15:34.103412 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:34.406453 master-0 kubenswrapper[26474]: I0223 13:15:34.405330 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c61886-6cc7-44aa-b56a-81cdcc670993" path="/var/lib/kubelet/pods/d7c61886-6cc7-44aa-b56a-81cdcc670993/volumes"
Feb 23 13:15:34.406453 master-0 kubenswrapper[26474]: I0223 13:15:34.406136 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47fa225-93fd-458b-b450-a0411e629afd" path="/var/lib/kubelet/pods/f47fa225-93fd-458b-b450-a0411e629afd/volumes"
Feb 23 13:15:34.539806 master-0 kubenswrapper[26474]: I0223 13:15:34.539720 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"]
Feb 23 13:15:34.549995 master-0 kubenswrapper[26474]: W0223 13:15:34.549929 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0027bc95_9535_4f53_ada1_c24e56d8c0ca.slice/crio-317903e52d1f0e9c8d4c147e0bc5c2c3054621e3f155aa7540d1a664eeb08446 WatchSource:0}: Error finding container 317903e52d1f0e9c8d4c147e0bc5c2c3054621e3f155aa7540d1a664eeb08446: Status 404 returned error can't find the container with id 317903e52d1f0e9c8d4c147e0bc5c2c3054621e3f155aa7540d1a664eeb08446
Feb 23 13:15:34.616602 master-0 kubenswrapper[26474]: I0223 13:15:34.616505 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58859ff46b-2b8pf"]
Feb 23 13:15:34.632176 master-0 kubenswrapper[26474]: W0223 13:15:34.632089 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5355312e_25b4_4ad2_8eb7_7c6289f3c46b.slice/crio-b6d238c8a4a950d2812db736d63c713b9cc2d4efd8bfd0638a39ab1894ae61dc WatchSource:0}: Error finding container b6d238c8a4a950d2812db736d63c713b9cc2d4efd8bfd0638a39ab1894ae61dc: Status 404 returned error can't find the container with id b6d238c8a4a950d2812db736d63c713b9cc2d4efd8bfd0638a39ab1894ae61dc
Feb 23 13:15:35.407600 master-0 kubenswrapper[26474]: I0223 13:15:35.407489 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-768998fb98-dpqwp"
Feb 23 13:15:35.407600 master-0 kubenswrapper[26474]: I0223 13:15:35.407569 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-768998fb98-dpqwp"
Feb 23 13:15:35.410585 master-0 kubenswrapper[26474]: I0223 13:15:35.410553 26474 patch_prober.go:28] interesting pod/console-768998fb98-dpqwp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body=
Feb 23 13:15:35.410675 master-0 kubenswrapper[26474]: I0223 13:15:35.410643 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-768998fb98-dpqwp" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused"
Feb 23 13:15:35.568985 master-0 kubenswrapper[26474]: I0223 13:15:35.568893 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99" event={"ID":"0027bc95-9535-4f53-ada1-c24e56d8c0ca","Type":"ContainerStarted","Data":"fee999fc2a67396d38ba15095a40a4c8eabc8a5cbc86b012d1ff41557629c8e7"}
Feb 23 13:15:35.568985 master-0 kubenswrapper[26474]: I0223 13:15:35.568960 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99" event={"ID":"0027bc95-9535-4f53-ada1-c24e56d8c0ca","Type":"ContainerStarted","Data":"317903e52d1f0e9c8d4c147e0bc5c2c3054621e3f155aa7540d1a664eeb08446"}
Feb 23 13:15:35.569418 master-0 kubenswrapper[26474]: I0223 13:15:35.569256 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:35.573330 master-0 kubenswrapper[26474]: I0223 13:15:35.573240 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf" event={"ID":"5355312e-25b4-4ad2-8eb7-7c6289f3c46b","Type":"ContainerStarted","Data":"92924279434f4d1ec4889123d3adcca2573f95b99eeabc3ac2dbdb6c38245751"}
Feb 23 13:15:35.573514 master-0 kubenswrapper[26474]: I0223 13:15:35.573361 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf" event={"ID":"5355312e-25b4-4ad2-8eb7-7c6289f3c46b","Type":"ContainerStarted","Data":"b6d238c8a4a950d2812db736d63c713b9cc2d4efd8bfd0638a39ab1894ae61dc"}
Feb 23 13:15:35.573983 master-0 kubenswrapper[26474]: I0223 13:15:35.573815 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf"
Feb 23 13:15:35.583225 master-0 kubenswrapper[26474]: I0223 13:15:35.577176 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99"
Feb 23 13:15:35.583225 master-0 kubenswrapper[26474]: I0223 13:15:35.582356 26474
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf" Feb 23 13:15:35.600776 master-0 kubenswrapper[26474]: I0223 13:15:35.600666 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fbc79d786-bnq99" podStartSLOduration=4.600636167 podStartE2EDuration="4.600636167s" podCreationTimestamp="2026-02-23 13:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:15:35.598986497 +0000 UTC m=+57.445494184" watchObservedRunningTime="2026-02-23 13:15:35.600636167 +0000 UTC m=+57.447143844" Feb 23 13:15:35.633743 master-0 kubenswrapper[26474]: I0223 13:15:35.633599 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58859ff46b-2b8pf" podStartSLOduration=4.633307907 podStartE2EDuration="4.633307907s" podCreationTimestamp="2026-02-23 13:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:15:35.624683906 +0000 UTC m=+57.471191593" watchObservedRunningTime="2026-02-23 13:15:35.633307907 +0000 UTC m=+57.479815664" Feb 23 13:15:38.393444 master-0 kubenswrapper[26474]: I0223 13:15:38.393224 26474 scope.go:117] "RemoveContainer" containerID="6266f5fd682a0e1614165c124ec4bfc2e4e9278c8768f489236b9ce20082b0a0" Feb 23 13:15:38.457042 master-0 kubenswrapper[26474]: I0223 13:15:38.456973 26474 scope.go:117] "RemoveContainer" containerID="a6c6c79f23b0abea958a23a6a452ad603f2442cfcf12d274565330ccbe7468f8" Feb 23 13:15:38.485473 master-0 kubenswrapper[26474]: I0223 13:15:38.485421 26474 scope.go:117] "RemoveContainer" containerID="b4fac1a45391e1b8c8d33575e403cce50d3b72e24f353f507b5f94bf171c63ab" Feb 23 13:15:40.945142 master-0 kubenswrapper[26474]: 
I0223 13:15:40.945039 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"] Feb 23 13:15:42.972570 master-0 kubenswrapper[26474]: I0223 13:15:42.972464 26474 patch_prober.go:28] interesting pod/console-57dc5b68f6-vsffj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body= Feb 23 13:15:42.972570 master-0 kubenswrapper[26474]: I0223 13:15:42.972560 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-57dc5b68f6-vsffj" podUID="68704d04-761c-464f-873e-657fb05b35f5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" Feb 23 13:15:45.408696 master-0 kubenswrapper[26474]: I0223 13:15:45.408576 26474 patch_prober.go:28] interesting pod/console-768998fb98-dpqwp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Feb 23 13:15:45.409663 master-0 kubenswrapper[26474]: I0223 13:15:45.408724 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-768998fb98-dpqwp" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Feb 23 13:15:46.713064 master-0 kubenswrapper[26474]: I0223 13:15:46.713001 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57dc5b68f6-vsffj"] Feb 23 13:15:46.780144 master-0 kubenswrapper[26474]: I0223 13:15:46.780073 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-dd5fdb7d7-wf5bd"] Feb 23 13:15:46.781141 master-0 kubenswrapper[26474]: I0223 
13:15:46.781115 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.853990 master-0 kubenswrapper[26474]: I0223 13:15:46.853943 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9zw9\" (UniqueName: \"kubernetes.io/projected/3a0b32d2-df4f-44e9-a841-b7e925783400-kube-api-access-d9zw9\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.854290 master-0 kubenswrapper[26474]: I0223 13:15:46.854259 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-oauth-config\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.854564 master-0 kubenswrapper[26474]: I0223 13:15:46.854495 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-oauth-serving-cert\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.854670 master-0 kubenswrapper[26474]: I0223 13:15:46.854648 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-service-ca\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.854714 master-0 kubenswrapper[26474]: I0223 13:15:46.854691 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-serving-cert\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.854754 master-0 kubenswrapper[26474]: I0223 13:15:46.854732 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-trusted-ca-bundle\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.855038 master-0 kubenswrapper[26474]: I0223 13:15:46.854980 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-console-config\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.968433 master-0 kubenswrapper[26474]: I0223 13:15:46.968125 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9zw9\" (UniqueName: \"kubernetes.io/projected/3a0b32d2-df4f-44e9-a841-b7e925783400-kube-api-access-d9zw9\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.968433 master-0 kubenswrapper[26474]: I0223 13:15:46.968208 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-oauth-config\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 
13:15:46.968433 master-0 kubenswrapper[26474]: I0223 13:15:46.968225 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-oauth-serving-cert\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.968433 master-0 kubenswrapper[26474]: I0223 13:15:46.968251 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-service-ca\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.968433 master-0 kubenswrapper[26474]: I0223 13:15:46.968271 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-serving-cert\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.968433 master-0 kubenswrapper[26474]: I0223 13:15:46.968290 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-trusted-ca-bundle\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.968433 master-0 kubenswrapper[26474]: I0223 13:15:46.968331 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-console-config\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " 
pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.972373 master-0 kubenswrapper[26474]: I0223 13:15:46.969192 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-console-config\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.972373 master-0 kubenswrapper[26474]: I0223 13:15:46.970194 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-service-ca\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.972373 master-0 kubenswrapper[26474]: I0223 13:15:46.971824 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-trusted-ca-bundle\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.972537 master-0 kubenswrapper[26474]: I0223 13:15:46.972395 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-oauth-serving-cert\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.972647 master-0 kubenswrapper[26474]: I0223 13:15:46.972507 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-dd5fdb7d7-wf5bd"] Feb 23 13:15:46.976365 master-0 kubenswrapper[26474]: I0223 13:15:46.973859 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-serving-cert\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:46.985376 master-0 kubenswrapper[26474]: I0223 13:15:46.984572 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-oauth-config\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:47.279393 master-0 kubenswrapper[26474]: I0223 13:15:47.278846 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9zw9\" (UniqueName: \"kubernetes.io/projected/3a0b32d2-df4f-44e9-a841-b7e925783400-kube-api-access-d9zw9\") pod \"console-dd5fdb7d7-wf5bd\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:47.401091 master-0 kubenswrapper[26474]: I0223 13:15:47.401012 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:50.343865 master-0 kubenswrapper[26474]: I0223 13:15:50.343787 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:15:50.344878 master-0 kubenswrapper[26474]: I0223 13:15:50.344848 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.346950 master-0 kubenswrapper[26474]: I0223 13:15:50.346904 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-hj5xm" Feb 23 13:15:50.347648 master-0 kubenswrapper[26474]: I0223 13:15:50.347621 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:15:50.371726 master-0 kubenswrapper[26474]: I0223 13:15:50.371646 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:15:50.435411 master-0 kubenswrapper[26474]: I0223 13:15:50.435356 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kube-api-access\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.435674 master-0 kubenswrapper[26474]: I0223 13:15:50.435605 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.435993 master-0 kubenswrapper[26474]: I0223 13:15:50.435952 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-var-lock\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.537273 master-0 kubenswrapper[26474]: I0223 13:15:50.537206 26474 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-var-lock\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.537593 master-0 kubenswrapper[26474]: I0223 13:15:50.537360 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-var-lock\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.537820 master-0 kubenswrapper[26474]: I0223 13:15:50.537756 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kube-api-access\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.537954 master-0 kubenswrapper[26474]: I0223 13:15:50.537845 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.538058 master-0 kubenswrapper[26474]: I0223 13:15:50.538013 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.558207 master-0 kubenswrapper[26474]: I0223 13:15:50.558168 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kube-api-access\") pod \"installer-4-master-0\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:50.707971 master-0 kubenswrapper[26474]: I0223 13:15:50.707719 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:15:52.569542 master-0 kubenswrapper[26474]: I0223 13:15:52.569472 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:15:52.573013 master-0 kubenswrapper[26474]: I0223 13:15:52.572964 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:15:52.670637 master-0 kubenswrapper[26474]: I0223 13:15:52.670563 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") pod \"27c1e327-cb40-4b36-b371-20d1271b8d8d\" (UID: \"27c1e327-cb40-4b36-b371-20d1271b8d8d\") " Feb 23 13:15:52.673231 master-0 kubenswrapper[26474]: I0223 13:15:52.673166 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "27c1e327-cb40-4b36-b371-20d1271b8d8d" (UID: 
"27c1e327-cb40-4b36-b371-20d1271b8d8d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:52.772188 master-0 kubenswrapper[26474]: I0223 13:15:52.772129 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27c1e327-cb40-4b36-b371-20d1271b8d8d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:15:55.407818 master-0 kubenswrapper[26474]: I0223 13:15:55.407621 26474 patch_prober.go:28] interesting pod/console-768998fb98-dpqwp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Feb 23 13:15:55.407818 master-0 kubenswrapper[26474]: I0223 13:15:55.407705 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-768998fb98-dpqwp" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Feb 23 13:15:55.561775 master-0 kubenswrapper[26474]: I0223 13:15:55.561708 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-dd5fdb7d7-wf5bd"] Feb 23 13:15:55.567141 master-0 kubenswrapper[26474]: W0223 13:15:55.567041 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a0b32d2_df4f_44e9_a841_b7e925783400.slice/crio-eab3aaf7f85b40d5ba67f3292063011f53a7bbf03aa5b976d15016d60d02103c WatchSource:0}: Error finding container eab3aaf7f85b40d5ba67f3292063011f53a7bbf03aa5b976d15016d60d02103c: Status 404 returned error can't find the container with id eab3aaf7f85b40d5ba67f3292063011f53a7bbf03aa5b976d15016d60d02103c Feb 23 13:15:55.659150 master-0 kubenswrapper[26474]: I0223 13:15:55.659094 26474 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:15:55.776665 master-0 kubenswrapper[26474]: I0223 13:15:55.776591 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dd5fdb7d7-wf5bd" event={"ID":"3a0b32d2-df4f-44e9-a841-b7e925783400","Type":"ContainerStarted","Data":"8f7858f429fae1e5b86aa6190b59689851482fe73d0e2b5dcffd6f308650acaa"} Feb 23 13:15:55.776665 master-0 kubenswrapper[26474]: I0223 13:15:55.776663 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dd5fdb7d7-wf5bd" event={"ID":"3a0b32d2-df4f-44e9-a841-b7e925783400","Type":"ContainerStarted","Data":"eab3aaf7f85b40d5ba67f3292063011f53a7bbf03aa5b976d15016d60d02103c"} Feb 23 13:15:55.784985 master-0 kubenswrapper[26474]: I0223 13:15:55.782373 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"379c746b-390e-4f41-9a3e-f2fc0eff3d64","Type":"ContainerStarted","Data":"a702d1d38fbbcd3378f86fa1b278dd915f180dcf81cd7b58037219ccd0ef8b38"} Feb 23 13:15:55.784985 master-0 kubenswrapper[26474]: I0223 13:15:55.784334 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-grk54" event={"ID":"449e8cbf-8db6-4709-b92f-a42410095ed2","Type":"ContainerStarted","Data":"5383e79a323af7253c723fc0f59b8f5656694e6d54d67193b916e6e7eddf25de"} Feb 23 13:15:55.785447 master-0 kubenswrapper[26474]: I0223 13:15:55.785316 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-955b69498-grk54" Feb 23 13:15:55.790835 master-0 kubenswrapper[26474]: I0223 13:15:55.790797 26474 patch_prober.go:28] interesting pod/downloads-955b69498-grk54 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" start-of-body= Feb 23 13:15:55.790929 master-0 kubenswrapper[26474]: 
I0223 13:15:55.790867 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-grk54" podUID="449e8cbf-8db6-4709-b92f-a42410095ed2" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" Feb 23 13:15:55.802238 master-0 kubenswrapper[26474]: I0223 13:15:55.802148 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-dd5fdb7d7-wf5bd" podStartSLOduration=9.802130197 podStartE2EDuration="9.802130197s" podCreationTimestamp="2026-02-23 13:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:15:55.799170554 +0000 UTC m=+77.645678251" watchObservedRunningTime="2026-02-23 13:15:55.802130197 +0000 UTC m=+77.648637884" Feb 23 13:15:55.827852 master-0 kubenswrapper[26474]: I0223 13:15:55.827735 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-955b69498-grk54" podStartSLOduration=1.955175611 podStartE2EDuration="36.827711813s" podCreationTimestamp="2026-02-23 13:15:19 +0000 UTC" firstStartedPulling="2026-02-23 13:15:20.48807834 +0000 UTC m=+42.334586017" lastFinishedPulling="2026-02-23 13:15:55.360614542 +0000 UTC m=+77.207122219" observedRunningTime="2026-02-23 13:15:55.819046611 +0000 UTC m=+77.665554298" watchObservedRunningTime="2026-02-23 13:15:55.827711813 +0000 UTC m=+77.674219490" Feb 23 13:15:56.793669 master-0 kubenswrapper[26474]: I0223 13:15:56.793601 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"379c746b-390e-4f41-9a3e-f2fc0eff3d64","Type":"ContainerStarted","Data":"031a7f8a16d8a94120bee5d3b226b6d9cbba12b7bcbfb656a145bfd5fa9d7ebc"} Feb 23 13:15:56.794831 master-0 kubenswrapper[26474]: I0223 13:15:56.794776 26474 patch_prober.go:28] interesting 
pod/downloads-955b69498-grk54 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" start-of-body= Feb 23 13:15:56.794908 master-0 kubenswrapper[26474]: I0223 13:15:56.794868 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-grk54" podUID="449e8cbf-8db6-4709-b92f-a42410095ed2" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" Feb 23 13:15:57.402048 master-0 kubenswrapper[26474]: I0223 13:15:57.401929 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:57.402458 master-0 kubenswrapper[26474]: I0223 13:15:57.402124 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:15:57.404622 master-0 kubenswrapper[26474]: I0223 13:15:57.404538 26474 patch_prober.go:28] interesting pod/console-dd5fdb7d7-wf5bd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Feb 23 13:15:57.404769 master-0 kubenswrapper[26474]: I0223 13:15:57.404635 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dd5fdb7d7-wf5bd" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Feb 23 13:15:57.801395 master-0 kubenswrapper[26474]: I0223 13:15:57.801287 26474 patch_prober.go:28] interesting pod/downloads-955b69498-grk54 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.86:8080/\": dial 
tcp 10.128.0.86:8080: connect: connection refused" start-of-body= Feb 23 13:15:57.802251 master-0 kubenswrapper[26474]: I0223 13:15:57.801426 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-grk54" podUID="449e8cbf-8db6-4709-b92f-a42410095ed2" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" Feb 23 13:16:00.066260 master-0 kubenswrapper[26474]: I0223 13:16:00.066099 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-955b69498-grk54" Feb 23 13:16:00.477413 master-0 kubenswrapper[26474]: I0223 13:16:00.477148 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=10.47710801 podStartE2EDuration="10.47710801s" podCreationTimestamp="2026-02-23 13:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:15:56.822132138 +0000 UTC m=+78.668639825" watchObservedRunningTime="2026-02-23 13:16:00.47710801 +0000 UTC m=+82.323615717" Feb 23 13:16:05.408744 master-0 kubenswrapper[26474]: I0223 13:16:05.408683 26474 patch_prober.go:28] interesting pod/console-768998fb98-dpqwp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Feb 23 13:16:05.410243 master-0 kubenswrapper[26474]: I0223 13:16:05.409521 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-768998fb98-dpqwp" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Feb 23 13:16:05.980437 master-0 
kubenswrapper[26474]: I0223 13:16:05.980299 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" podUID="b7b6a893-4c46-4b86-b233-192e81966dec" containerName="oauth-openshift" containerID="cri-o://145d49f79b330a4c807885a2800e457a51c06f59224fa3faba8d3d4d6f16dbfd" gracePeriod=15 Feb 23 13:16:06.888812 master-0 kubenswrapper[26474]: I0223 13:16:06.888684 26474 generic.go:334] "Generic (PLEG): container finished" podID="b7b6a893-4c46-4b86-b233-192e81966dec" containerID="145d49f79b330a4c807885a2800e457a51c06f59224fa3faba8d3d4d6f16dbfd" exitCode=0 Feb 23 13:16:06.888812 master-0 kubenswrapper[26474]: I0223 13:16:06.888785 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" event={"ID":"b7b6a893-4c46-4b86-b233-192e81966dec","Type":"ContainerDied","Data":"145d49f79b330a4c807885a2800e457a51c06f59224fa3faba8d3d4d6f16dbfd"} Feb 23 13:16:07.191822 master-0 kubenswrapper[26474]: I0223 13:16:07.191762 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" Feb 23 13:16:07.323057 master-0 kubenswrapper[26474]: I0223 13:16:07.322925 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-service-ca\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323057 master-0 kubenswrapper[26474]: I0223 13:16:07.323028 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323057 master-0 kubenswrapper[26474]: I0223 13:16:07.323084 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-ocp-branding-template\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323121 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-login\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323155 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-trusted-ca-bundle\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323174 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323201 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-audit-policies\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323230 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vgzh\" (UniqueName: \"kubernetes.io/projected/b7b6a893-4c46-4b86-b233-192e81966dec-kube-api-access-5vgzh\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323249 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-serving-cert\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323264 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-router-certs\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323295 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7b6a893-4c46-4b86-b233-192e81966dec-audit-dir\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323315 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-error\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323355 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-provider-selection\") pod \"b7b6a893-4c46-4b86-b233-192e81966dec\" (UID: \"b7b6a893-4c46-4b86-b233-192e81966dec\") " Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323771 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b6a893-4c46-4b86-b233-192e81966dec-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:16:07.323868 master-0 kubenswrapper[26474]: I0223 13:16:07.323861 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:16:07.325194 master-0 kubenswrapper[26474]: I0223 13:16:07.325017 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:16:07.325194 master-0 kubenswrapper[26474]: I0223 13:16:07.325090 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:16:07.326242 master-0 kubenswrapper[26474]: I0223 13:16:07.326136 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:16:07.328777 master-0 kubenswrapper[26474]: I0223 13:16:07.328665 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:07.329114 master-0 kubenswrapper[26474]: I0223 13:16:07.328865 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:07.329114 master-0 kubenswrapper[26474]: I0223 13:16:07.328944 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:07.329544 master-0 kubenswrapper[26474]: I0223 13:16:07.329500 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:07.330018 master-0 kubenswrapper[26474]: I0223 13:16:07.329954 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b6a893-4c46-4b86-b233-192e81966dec-kube-api-access-5vgzh" (OuterVolumeSpecName: "kube-api-access-5vgzh") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "kube-api-access-5vgzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:16:07.334515 master-0 kubenswrapper[26474]: I0223 13:16:07.334443 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:07.334706 master-0 kubenswrapper[26474]: I0223 13:16:07.334594 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:07.335058 master-0 kubenswrapper[26474]: I0223 13:16:07.334496 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b7b6a893-4c46-4b86-b233-192e81966dec" (UID: "b7b6a893-4c46-4b86-b233-192e81966dec"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:07.403308 master-0 kubenswrapper[26474]: I0223 13:16:07.403102 26474 patch_prober.go:28] interesting pod/console-dd5fdb7d7-wf5bd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Feb 23 13:16:07.403308 master-0 kubenswrapper[26474]: I0223 13:16:07.403202 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dd5fdb7d7-wf5bd" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Feb 23 13:16:07.425244 master-0 kubenswrapper[26474]: I0223 13:16:07.425162 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.425244 master-0 kubenswrapper[26474]: I0223 13:16:07.425203 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.425244 master-0 
kubenswrapper[26474]: I0223 13:16:07.425220 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.425244 master-0 kubenswrapper[26474]: I0223 13:16:07.425235 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.425244 master-0 kubenswrapper[26474]: I0223 13:16:07.425250 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.425244 master-0 kubenswrapper[26474]: I0223 13:16:07.425266 26474 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.425244 master-0 kubenswrapper[26474]: I0223 13:16:07.425280 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vgzh\" (UniqueName: \"kubernetes.io/projected/b7b6a893-4c46-4b86-b233-192e81966dec-kube-api-access-5vgzh\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.425244 master-0 kubenswrapper[26474]: I0223 13:16:07.425293 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.426304 master-0 kubenswrapper[26474]: I0223 13:16:07.425310 26474 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.426304 master-0 kubenswrapper[26474]: I0223 13:16:07.425327 26474 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7b6a893-4c46-4b86-b233-192e81966dec-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.426304 master-0 kubenswrapper[26474]: I0223 13:16:07.425356 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.426304 master-0 kubenswrapper[26474]: I0223 13:16:07.425370 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.426304 master-0 kubenswrapper[26474]: I0223 13:16:07.425386 26474 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b6a893-4c46-4b86-b233-192e81966dec-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:07.624656 master-0 kubenswrapper[26474]: I0223 13:16:07.624551 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:16:07.625593 master-0 kubenswrapper[26474]: I0223 13:16:07.624869 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="379c746b-390e-4f41-9a3e-f2fc0eff3d64" containerName="installer" containerID="cri-o://031a7f8a16d8a94120bee5d3b226b6d9cbba12b7bcbfb656a145bfd5fa9d7ebc" 
gracePeriod=30 Feb 23 13:16:07.824283 master-0 kubenswrapper[26474]: I0223 13:16:07.824102 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"] Feb 23 13:16:07.824502 master-0 kubenswrapper[26474]: E0223 13:16:07.824445 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b6a893-4c46-4b86-b233-192e81966dec" containerName="oauth-openshift" Feb 23 13:16:07.824502 master-0 kubenswrapper[26474]: I0223 13:16:07.824461 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b6a893-4c46-4b86-b233-192e81966dec" containerName="oauth-openshift" Feb 23 13:16:07.824652 master-0 kubenswrapper[26474]: I0223 13:16:07.824627 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b6a893-4c46-4b86-b233-192e81966dec" containerName="oauth-openshift" Feb 23 13:16:07.825128 master-0 kubenswrapper[26474]: I0223 13:16:07.825106 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.897291 master-0 kubenswrapper[26474]: I0223 13:16:07.897194 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" event={"ID":"b7b6a893-4c46-4b86-b233-192e81966dec","Type":"ContainerDied","Data":"40e1b0457c82a8aa4a23862f510a8497ecff01a5c692ebcfa99a917a4688296b"} Feb 23 13:16:07.897291 master-0 kubenswrapper[26474]: I0223 13:16:07.897273 26474 scope.go:117] "RemoveContainer" containerID="145d49f79b330a4c807885a2800e457a51c06f59224fa3faba8d3d4d6f16dbfd" Feb 23 13:16:07.897859 master-0 kubenswrapper[26474]: I0223 13:16:07.897404 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66d7b67b49-g8lgn" Feb 23 13:16:07.934784 master-0 kubenswrapper[26474]: I0223 13:16:07.934692 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.934915 master-0 kubenswrapper[26474]: I0223 13:16:07.934841 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-router-certs\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.934915 master-0 kubenswrapper[26474]: I0223 13:16:07.934896 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-service-ca\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935055 master-0 kubenswrapper[26474]: I0223 13:16:07.934939 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " 
pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935055 master-0 kubenswrapper[26474]: I0223 13:16:07.935044 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-error\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935146 master-0 kubenswrapper[26474]: I0223 13:16:07.935093 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935146 master-0 kubenswrapper[26474]: I0223 13:16:07.935125 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17631a06-1002-4ff2-8d03-55948198b2ea-audit-dir\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935228 master-0 kubenswrapper[26474]: I0223 13:16:07.935184 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-login\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935276 master-0 kubenswrapper[26474]: I0223 
13:16:07.935221 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4tgp\" (UniqueName: \"kubernetes.io/projected/17631a06-1002-4ff2-8d03-55948198b2ea-kube-api-access-n4tgp\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935276 master-0 kubenswrapper[26474]: I0223 13:16:07.935259 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935426 master-0 kubenswrapper[26474]: I0223 13:16:07.935305 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935426 master-0 kubenswrapper[26474]: I0223 13:16:07.935380 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-session\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:07.935524 master-0 kubenswrapper[26474]: I0223 13:16:07.935425 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-audit-policies\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:08.037905 master-0 kubenswrapper[26474]: I0223 13:16:08.037800 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17631a06-1002-4ff2-8d03-55948198b2ea-audit-dir\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:08.038189 master-0 kubenswrapper[26474]: I0223 13:16:08.037957 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-login\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:08.038189 master-0 kubenswrapper[26474]: I0223 13:16:08.037994 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17631a06-1002-4ff2-8d03-55948198b2ea-audit-dir\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:08.038189 master-0 kubenswrapper[26474]: I0223 13:16:08.038018 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4tgp\" (UniqueName: \"kubernetes.io/projected/17631a06-1002-4ff2-8d03-55948198b2ea-kube-api-access-n4tgp\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 
23 13:16:08.038189 master-0 kubenswrapper[26474]: I0223 13:16:08.038135 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:08.038392 master-0 kubenswrapper[26474]: I0223 13:16:08.038257 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:08.038392 master-0 kubenswrapper[26474]: I0223 13:16:08.038321 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-session\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:08.038473 master-0 kubenswrapper[26474]: I0223 13:16:08.038404 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-audit-policies\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" Feb 23 13:16:08.038516 master-0 kubenswrapper[26474]: I0223 13:16:08.038480 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.038733 master-0 kubenswrapper[26474]: I0223 13:16:08.038676 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-router-certs\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.038790 master-0 kubenswrapper[26474]: I0223 13:16:08.038743 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-service-ca\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.038872 master-0 kubenswrapper[26474]: I0223 13:16:08.038800 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.038919 master-0 kubenswrapper[26474]: I0223 13:16:08.038881 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-error\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.039605 master-0 kubenswrapper[26474]: I0223 13:16:08.039498 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.039851 master-0 kubenswrapper[26474]: I0223 13:16:08.039731 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-audit-policies\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.039914 master-0 kubenswrapper[26474]: I0223 13:16:08.039860 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.040532 master-0 kubenswrapper[26474]: I0223 13:16:08.040494 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.040782 master-0 kubenswrapper[26474]: I0223 13:16:08.040712 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-service-ca\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.042585 master-0 kubenswrapper[26474]: I0223 13:16:08.042484 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-login\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.043274 master-0 kubenswrapper[26474]: I0223 13:16:08.042977 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-router-certs\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.043274 master-0 kubenswrapper[26474]: I0223 13:16:08.043013 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.043274 master-0 kubenswrapper[26474]: I0223 13:16:08.043225 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-session\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.043714 master-0 kubenswrapper[26474]: I0223 13:16:08.043647 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.043829 master-0 kubenswrapper[26474]: I0223 13:16:08.043788 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.049807 master-0 kubenswrapper[26474]: I0223 13:16:08.049765 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17631a06-1002-4ff2-8d03-55948198b2ea-v4-0-config-user-template-error\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:08.534394 master-0 kubenswrapper[26474]: I0223 13:16:08.512649 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"]
Feb 23 13:16:08.801772 master-0 kubenswrapper[26474]: I0223 13:16:08.801595 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4tgp\" (UniqueName: \"kubernetes.io/projected/17631a06-1002-4ff2-8d03-55948198b2ea-kube-api-access-n4tgp\") pod \"oauth-openshift-646bd84bcd-mzrwt\" (UID: \"17631a06-1002-4ff2-8d03-55948198b2ea\") " pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:09.046393 master-0 kubenswrapper[26474]: I0223 13:16:09.046290 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:10.256681 master-0 kubenswrapper[26474]: I0223 13:16:10.256570 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"]
Feb 23 13:16:10.260196 master-0 kubenswrapper[26474]: I0223 13:16:10.260087 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"]
Feb 23 13:16:10.537757 master-0 kubenswrapper[26474]: I0223 13:16:10.537673 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66d7b67b49-g8lgn"]
Feb 23 13:16:10.926478 master-0 kubenswrapper[26474]: I0223 13:16:10.926389 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" event={"ID":"17631a06-1002-4ff2-8d03-55948198b2ea","Type":"ContainerStarted","Data":"ed93966e78468c68c8713e1c628a4b64880ce93824d5a8e708cfa8f3306ac892"}
Feb 23 13:16:10.926478 master-0 kubenswrapper[26474]: I0223 13:16:10.926457 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" event={"ID":"17631a06-1002-4ff2-8d03-55948198b2ea","Type":"ContainerStarted","Data":"21f7193edb446192619c635251cbf54e14ec583fb0d97fda96eb452540fe2a9a"}
Feb 23 13:16:10.965898 master-0 kubenswrapper[26474]: I0223 13:16:10.965743 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 23 13:16:10.966969 master-0 kubenswrapper[26474]: I0223 13:16:10.966902 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.007262 master-0 kubenswrapper[26474]: I0223 13:16:11.007142 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-var-lock\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.007545 master-0 kubenswrapper[26474]: I0223 13:16:11.007285 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.007545 master-0 kubenswrapper[26474]: I0223 13:16:11.007416 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f01779-caef-46f3-ac91-89f32798535b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.109218 master-0 kubenswrapper[26474]: I0223 13:16:11.109121 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.109567 master-0 kubenswrapper[26474]: I0223 13:16:11.109295 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.109567 master-0 kubenswrapper[26474]: I0223 13:16:11.109395 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f01779-caef-46f3-ac91-89f32798535b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.109639 master-0 kubenswrapper[26474]: I0223 13:16:11.109602 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-var-lock\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.109680 master-0 kubenswrapper[26474]: I0223 13:16:11.109654 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-var-lock\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.129794 master-0 kubenswrapper[26474]: I0223 13:16:11.129711 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 23 13:16:11.583573 master-0 kubenswrapper[26474]: I0223 13:16:11.583528 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f01779-caef-46f3-ac91-89f32798535b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.599610 master-0 kubenswrapper[26474]: I0223 13:16:11.599574 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 23 13:16:11.764220 master-0 kubenswrapper[26474]: I0223 13:16:11.764158 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57dc5b68f6-vsffj" podUID="68704d04-761c-464f-873e-657fb05b35f5" containerName="console" containerID="cri-o://cb938b3ef459343cfeab6a345c4ee9cf39c212fc4c950b4bb5611411963d6730" gracePeriod=15
Feb 23 13:16:11.934321 master-0 kubenswrapper[26474]: I0223 13:16:11.934225 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57dc5b68f6-vsffj_68704d04-761c-464f-873e-657fb05b35f5/console/0.log"
Feb 23 13:16:11.934321 master-0 kubenswrapper[26474]: I0223 13:16:11.934308 26474 generic.go:334] "Generic (PLEG): container finished" podID="68704d04-761c-464f-873e-657fb05b35f5" containerID="cb938b3ef459343cfeab6a345c4ee9cf39c212fc4c950b4bb5611411963d6730" exitCode=2
Feb 23 13:16:11.934603 master-0 kubenswrapper[26474]: I0223 13:16:11.934362 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57dc5b68f6-vsffj" event={"ID":"68704d04-761c-464f-873e-657fb05b35f5","Type":"ContainerDied","Data":"cb938b3ef459343cfeab6a345c4ee9cf39c212fc4c950b4bb5611411963d6730"}
Feb 23 13:16:11.934661 master-0 kubenswrapper[26474]: I0223 13:16:11.934618 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:12.403419 master-0 kubenswrapper[26474]: I0223 13:16:12.403332 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b6a893-4c46-4b86-b233-192e81966dec" path="/var/lib/kubelet/pods/b7b6a893-4c46-4b86-b233-192e81966dec/volumes"
Feb 23 13:16:12.911561 master-0 kubenswrapper[26474]: I0223 13:16:12.911493 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt"
Feb 23 13:16:12.915487 master-0 kubenswrapper[26474]: I0223 13:16:12.915319 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-646bd84bcd-mzrwt" podStartSLOduration=32.915301824 podStartE2EDuration="32.915301824s" podCreationTimestamp="2026-02-23 13:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:16:12.892490996 +0000 UTC m=+94.738998713" watchObservedRunningTime="2026-02-23 13:16:12.915301824 +0000 UTC m=+94.761809521"
Feb 23 13:16:12.916181 master-0 kubenswrapper[26474]: I0223 13:16:12.916136 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 23 13:16:12.945937 master-0 kubenswrapper[26474]: I0223 13:16:12.945796 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"75f01779-caef-46f3-ac91-89f32798535b","Type":"ContainerStarted","Data":"bc4f80b11192ee4242694bfb33b8a867850c1044ab52570db83f2eb80271eee6"}
Feb 23 13:16:13.041395 master-0 kubenswrapper[26474]: I0223 13:16:13.041355 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57dc5b68f6-vsffj_68704d04-761c-464f-873e-657fb05b35f5/console/0.log"
Feb 23 13:16:13.041599 master-0 kubenswrapper[26474]: I0223 13:16:13.041433 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:16:13.144899 master-0 kubenswrapper[26474]: I0223 13:16:13.144829 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-serving-cert\") pod \"68704d04-761c-464f-873e-657fb05b35f5\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") "
Feb 23 13:16:13.144899 master-0 kubenswrapper[26474]: I0223 13:16:13.144890 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-service-ca\") pod \"68704d04-761c-464f-873e-657fb05b35f5\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") "
Feb 23 13:16:13.145154 master-0 kubenswrapper[26474]: I0223 13:16:13.144927 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-oauth-serving-cert\") pod \"68704d04-761c-464f-873e-657fb05b35f5\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") "
Feb 23 13:16:13.145154 master-0 kubenswrapper[26474]: I0223 13:16:13.145008 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-console-config\") pod \"68704d04-761c-464f-873e-657fb05b35f5\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") "
Feb 23 13:16:13.145154 master-0 kubenswrapper[26474]: I0223 13:16:13.145118 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-oauth-config\") pod \"68704d04-761c-464f-873e-657fb05b35f5\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") "
Feb 23 13:16:13.145154 master-0 kubenswrapper[26474]: I0223 13:16:13.145145 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw8hq\" (UniqueName: \"kubernetes.io/projected/68704d04-761c-464f-873e-657fb05b35f5-kube-api-access-zw8hq\") pod \"68704d04-761c-464f-873e-657fb05b35f5\" (UID: \"68704d04-761c-464f-873e-657fb05b35f5\") "
Feb 23 13:16:13.145562 master-0 kubenswrapper[26474]: I0223 13:16:13.145508 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "68704d04-761c-464f-873e-657fb05b35f5" (UID: "68704d04-761c-464f-873e-657fb05b35f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:16:13.145729 master-0 kubenswrapper[26474]: I0223 13:16:13.145681 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-console-config" (OuterVolumeSpecName: "console-config") pod "68704d04-761c-464f-873e-657fb05b35f5" (UID: "68704d04-761c-464f-873e-657fb05b35f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:16:13.145729 master-0 kubenswrapper[26474]: I0223 13:16:13.145703 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "68704d04-761c-464f-873e-657fb05b35f5" (UID: "68704d04-761c-464f-873e-657fb05b35f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:16:13.147974 master-0 kubenswrapper[26474]: I0223 13:16:13.147932 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "68704d04-761c-464f-873e-657fb05b35f5" (UID: "68704d04-761c-464f-873e-657fb05b35f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:16:13.148037 master-0 kubenswrapper[26474]: I0223 13:16:13.147966 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68704d04-761c-464f-873e-657fb05b35f5-kube-api-access-zw8hq" (OuterVolumeSpecName: "kube-api-access-zw8hq") pod "68704d04-761c-464f-873e-657fb05b35f5" (UID: "68704d04-761c-464f-873e-657fb05b35f5"). InnerVolumeSpecName "kube-api-access-zw8hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:16:13.148191 master-0 kubenswrapper[26474]: I0223 13:16:13.148156 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "68704d04-761c-464f-873e-657fb05b35f5" (UID: "68704d04-761c-464f-873e-657fb05b35f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:16:13.253188 master-0 kubenswrapper[26474]: I0223 13:16:13.246602 26474 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:13.253188 master-0 kubenswrapper[26474]: I0223 13:16:13.246659 26474 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:13.253188 master-0 kubenswrapper[26474]: I0223 13:16:13.246670 26474 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:13.253188 master-0 kubenswrapper[26474]: I0223 13:16:13.246682 26474 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68704d04-761c-464f-873e-657fb05b35f5-console-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:13.253188 master-0 kubenswrapper[26474]: I0223 13:16:13.246690 26474 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68704d04-761c-464f-873e-657fb05b35f5-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:13.253188 master-0 kubenswrapper[26474]: I0223 13:16:13.246699 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw8hq\" (UniqueName: \"kubernetes.io/projected/68704d04-761c-464f-873e-657fb05b35f5-kube-api-access-zw8hq\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:13.622624 master-0 kubenswrapper[26474]: I0223 13:16:13.622544 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-tblcw"]
Feb 23 13:16:13.625391 master-0 kubenswrapper[26474]: E0223 13:16:13.623238 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68704d04-761c-464f-873e-657fb05b35f5" containerName="console"
Feb 23 13:16:13.625391 master-0 kubenswrapper[26474]: I0223 13:16:13.623262 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="68704d04-761c-464f-873e-657fb05b35f5" containerName="console"
Feb 23 13:16:13.625391 master-0 kubenswrapper[26474]: I0223 13:16:13.623488 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="68704d04-761c-464f-873e-657fb05b35f5" containerName="console"
Feb 23 13:16:13.625391 master-0 kubenswrapper[26474]: I0223 13:16:13.623986 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw"
Feb 23 13:16:13.637374 master-0 kubenswrapper[26474]: I0223 13:16:13.630590 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 23 13:16:13.637374 master-0 kubenswrapper[26474]: I0223 13:16:13.630797 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 23 13:16:13.673376 master-0 kubenswrapper[26474]: I0223 13:16:13.667806 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-tblcw"]
Feb 23 13:16:13.756787 master-0 kubenswrapper[26474]: I0223 13:16:13.754817 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/62a7dee4-75b7-479a-b572-a8fb212b87eb-nginx-conf\") pod \"networking-console-plugin-79f587d78f-tblcw\" (UID: \"62a7dee4-75b7-479a-b572-a8fb212b87eb\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw"
Feb 23 13:16:13.756787 master-0 kubenswrapper[26474]: I0223 13:16:13.754886 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/62a7dee4-75b7-479a-b572-a8fb212b87eb-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-tblcw\" (UID: \"62a7dee4-75b7-479a-b572-a8fb212b87eb\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw"
Feb 23 13:16:13.797435 master-0 kubenswrapper[26474]: I0223 13:16:13.795972 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-768998fb98-dpqwp"]
Feb 23 13:16:13.857392 master-0 kubenswrapper[26474]: I0223 13:16:13.856919 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/62a7dee4-75b7-479a-b572-a8fb212b87eb-nginx-conf\") pod \"networking-console-plugin-79f587d78f-tblcw\" (UID: \"62a7dee4-75b7-479a-b572-a8fb212b87eb\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw"
Feb 23 13:16:13.857392 master-0 kubenswrapper[26474]: I0223 13:16:13.857218 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/62a7dee4-75b7-479a-b572-a8fb212b87eb-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-tblcw\" (UID: \"62a7dee4-75b7-479a-b572-a8fb212b87eb\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw"
Feb 23 13:16:13.859403 master-0 kubenswrapper[26474]: I0223 13:16:13.857962 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/62a7dee4-75b7-479a-b572-a8fb212b87eb-nginx-conf\") pod \"networking-console-plugin-79f587d78f-tblcw\" (UID: \"62a7dee4-75b7-479a-b572-a8fb212b87eb\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw"
Feb 23 13:16:13.861267 master-0 kubenswrapper[26474]: I0223 13:16:13.861215 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/62a7dee4-75b7-479a-b572-a8fb212b87eb-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-tblcw\" (UID: \"62a7dee4-75b7-479a-b572-a8fb212b87eb\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw"
Feb 23 13:16:13.921075 master-0 kubenswrapper[26474]: I0223 13:16:13.921006 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7878b5757c-w9bdq"]
Feb 23 13:16:13.922361 master-0 kubenswrapper[26474]: I0223 13:16:13.922272 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:13.952704 master-0 kubenswrapper[26474]: I0223 13:16:13.952649 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw"
Feb 23 13:16:13.956380 master-0 kubenswrapper[26474]: I0223 13:16:13.953428 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57dc5b68f6-vsffj_68704d04-761c-464f-873e-657fb05b35f5/console/0.log"
Feb 23 13:16:13.956380 master-0 kubenswrapper[26474]: I0223 13:16:13.953554 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57dc5b68f6-vsffj"
Feb 23 13:16:13.956380 master-0 kubenswrapper[26474]: I0223 13:16:13.953555 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57dc5b68f6-vsffj" event={"ID":"68704d04-761c-464f-873e-657fb05b35f5","Type":"ContainerDied","Data":"960409c6996094dcc4442cd7371eae66678c819a87eb052b88d899ad667d6894"}
Feb 23 13:16:13.956380 master-0 kubenswrapper[26474]: I0223 13:16:13.953728 26474 scope.go:117] "RemoveContainer" containerID="cb938b3ef459343cfeab6a345c4ee9cf39c212fc4c950b4bb5611411963d6730"
Feb 23 13:16:13.956989 master-0 kubenswrapper[26474]: I0223 13:16:13.956660 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"75f01779-caef-46f3-ac91-89f32798535b","Type":"ContainerStarted","Data":"6ae764c36f75fb31278af573e5a47b73bf398ccd1fd74fb456dba322065bc861"}
Feb 23 13:16:14.018852 master-0 kubenswrapper[26474]: I0223 13:16:14.017509 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7878b5757c-w9bdq"]
Feb 23 13:16:14.061400 master-0 kubenswrapper[26474]: I0223 13:16:14.061322 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-serving-cert\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.061616 master-0 kubenswrapper[26474]: I0223 13:16:14.061411 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-oauth-config\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.061616 master-0 kubenswrapper[26474]: I0223 13:16:14.061433 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9qsf\" (UniqueName: \"kubernetes.io/projected/64fd43ba-9c74-4497-aeab-d2c107eca1b1-kube-api-access-t9qsf\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.061616 master-0 kubenswrapper[26474]: I0223 13:16:14.061469 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-trusted-ca-bundle\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.061616 master-0 kubenswrapper[26474]: I0223 13:16:14.061504 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-config\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.061616 master-0 kubenswrapper[26474]: I0223 13:16:14.061554 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-service-ca\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.061616 master-0 kubenswrapper[26474]: I0223 13:16:14.061586 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-oauth-serving-cert\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.066862 master-0 kubenswrapper[26474]: I0223 13:16:14.066724 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=4.06670555 podStartE2EDuration="4.06670555s" podCreationTimestamp="2026-02-23 13:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:16:14.022475168 +0000 UTC m=+95.868982845" watchObservedRunningTime="2026-02-23 13:16:14.06670555 +0000 UTC m=+95.913213227"
Feb 23 13:16:14.067277 master-0 kubenswrapper[26474]: I0223 13:16:14.067249 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57dc5b68f6-vsffj"]
Feb 23 13:16:14.083664 master-0 kubenswrapper[26474]: I0223 13:16:14.083581 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57dc5b68f6-vsffj"]
Feb 23 13:16:14.169225 master-0 kubenswrapper[26474]: I0223 13:16:14.169148 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-serving-cert\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.169513 master-0 kubenswrapper[26474]: I0223 13:16:14.169240 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-oauth-config\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.169513 master-0 kubenswrapper[26474]: I0223 13:16:14.169283 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9qsf\" (UniqueName: \"kubernetes.io/projected/64fd43ba-9c74-4497-aeab-d2c107eca1b1-kube-api-access-t9qsf\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.169513 master-0 kubenswrapper[26474]: I0223 13:16:14.169315 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-trusted-ca-bundle\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.169513 master-0 kubenswrapper[26474]: I0223 13:16:14.169380 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-config\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.170275 master-0 kubenswrapper[26474]: I0223 13:16:14.170231 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-service-ca\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.170371 master-0 kubenswrapper[26474]: I0223 13:16:14.170299 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-oauth-serving-cert\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.172207 master-0 kubenswrapper[26474]: I0223 13:16:14.172043 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-config\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.172670 master-0 kubenswrapper[26474]: I0223 13:16:14.172623 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-oauth-serving-cert\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.172814 master-0 kubenswrapper[26474]: I0223 13:16:14.172722 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-service-ca\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.172951 master-0 kubenswrapper[26474]: I0223 13:16:14.172919 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-trusted-ca-bundle\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.173303 master-0 kubenswrapper[26474]: I0223 13:16:14.173267 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-serving-cert\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.174914 master-0 kubenswrapper[26474]: I0223 13:16:14.174873 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-oauth-config\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.187082 master-0 kubenswrapper[26474]: I0223 13:16:14.187024 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9qsf\" (UniqueName: \"kubernetes.io/projected/64fd43ba-9c74-4497-aeab-d2c107eca1b1-kube-api-access-t9qsf\") pod \"console-7878b5757c-w9bdq\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") " pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.335383 master-0 kubenswrapper[26474]: I0223 13:16:14.335245 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:16:14.406365 master-0 kubenswrapper[26474]: I0223 13:16:14.403231 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68704d04-761c-464f-873e-657fb05b35f5" path="/var/lib/kubelet/pods/68704d04-761c-464f-873e-657fb05b35f5/volumes"
Feb 23 13:16:14.406365 master-0 kubenswrapper[26474]: I0223 13:16:14.403907 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-tblcw"]
Feb 23 13:16:14.406365 master-0 kubenswrapper[26474]: W0223 13:16:14.404540 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a7dee4_75b7_479a_b572_a8fb212b87eb.slice/crio-afc5f09c84334db2ea5eae30a4e99463f353955452e21cd1b55e18309f31736d WatchSource:0}: Error finding container afc5f09c84334db2ea5eae30a4e99463f353955452e21cd1b55e18309f31736d: Status 404 returned error can't find the container with id
afc5f09c84334db2ea5eae30a4e99463f353955452e21cd1b55e18309f31736d Feb 23 13:16:14.805808 master-0 kubenswrapper[26474]: I0223 13:16:14.805731 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7878b5757c-w9bdq"] Feb 23 13:16:14.817570 master-0 kubenswrapper[26474]: W0223 13:16:14.817271 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64fd43ba_9c74_4497_aeab_d2c107eca1b1.slice/crio-341d2300b8becdbe65ae7366eedbb67af7576bc00267c98d3e3971d4084686d1 WatchSource:0}: Error finding container 341d2300b8becdbe65ae7366eedbb67af7576bc00267c98d3e3971d4084686d1: Status 404 returned error can't find the container with id 341d2300b8becdbe65ae7366eedbb67af7576bc00267c98d3e3971d4084686d1 Feb 23 13:16:14.969669 master-0 kubenswrapper[26474]: I0223 13:16:14.969613 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7878b5757c-w9bdq" event={"ID":"64fd43ba-9c74-4497-aeab-d2c107eca1b1","Type":"ContainerStarted","Data":"341d2300b8becdbe65ae7366eedbb67af7576bc00267c98d3e3971d4084686d1"} Feb 23 13:16:14.971404 master-0 kubenswrapper[26474]: I0223 13:16:14.971365 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw" event={"ID":"62a7dee4-75b7-479a-b572-a8fb212b87eb","Type":"ContainerStarted","Data":"afc5f09c84334db2ea5eae30a4e99463f353955452e21cd1b55e18309f31736d"} Feb 23 13:16:15.985146 master-0 kubenswrapper[26474]: I0223 13:16:15.984794 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7878b5757c-w9bdq" event={"ID":"64fd43ba-9c74-4497-aeab-d2c107eca1b1","Type":"ContainerStarted","Data":"2bc4645d5f921657472c9470fe31be381cbfbd65864b0e00a74d81c3cd016156"} Feb 23 13:16:15.989941 master-0 kubenswrapper[26474]: I0223 13:16:15.987928 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw" event={"ID":"62a7dee4-75b7-479a-b572-a8fb212b87eb","Type":"ContainerStarted","Data":"2be7a5122eeb17b3c7a136ff2ec35ca2728ed166a25647cfd821d47d6e073062"} Feb 23 13:16:16.015756 master-0 kubenswrapper[26474]: I0223 13:16:16.015636 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7878b5757c-w9bdq" podStartSLOduration=3.015604722 podStartE2EDuration="3.015604722s" podCreationTimestamp="2026-02-23 13:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:16:16.010635981 +0000 UTC m=+97.857143668" watchObservedRunningTime="2026-02-23 13:16:16.015604722 +0000 UTC m=+97.862112439" Feb 23 13:16:17.402597 master-0 kubenswrapper[26474]: I0223 13:16:17.402472 26474 patch_prober.go:28] interesting pod/console-dd5fdb7d7-wf5bd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Feb 23 13:16:17.402597 master-0 kubenswrapper[26474]: I0223 13:16:17.402580 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dd5fdb7d7-wf5bd" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Feb 23 13:16:24.335857 master-0 kubenswrapper[26474]: I0223 13:16:24.335745 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7878b5757c-w9bdq" Feb 23 13:16:24.335857 master-0 kubenswrapper[26474]: I0223 13:16:24.335852 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7878b5757c-w9bdq" Feb 23 13:16:24.338150 master-0 kubenswrapper[26474]: I0223 13:16:24.338033 
26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Feb 23 13:16:24.338321 master-0 kubenswrapper[26474]: I0223 13:16:24.338189 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Feb 23 13:16:27.103624 master-0 kubenswrapper[26474]: I0223 13:16:27.103403 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_379c746b-390e-4f41-9a3e-f2fc0eff3d64/installer/0.log" Feb 23 13:16:27.103624 master-0 kubenswrapper[26474]: I0223 13:16:27.103500 26474 generic.go:334] "Generic (PLEG): container finished" podID="379c746b-390e-4f41-9a3e-f2fc0eff3d64" containerID="031a7f8a16d8a94120bee5d3b226b6d9cbba12b7bcbfb656a145bfd5fa9d7ebc" exitCode=1 Feb 23 13:16:27.103624 master-0 kubenswrapper[26474]: I0223 13:16:27.103547 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"379c746b-390e-4f41-9a3e-f2fc0eff3d64","Type":"ContainerDied","Data":"031a7f8a16d8a94120bee5d3b226b6d9cbba12b7bcbfb656a145bfd5fa9d7ebc"} Feb 23 13:16:27.327895 master-0 kubenswrapper[26474]: I0223 13:16:27.327812 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_379c746b-390e-4f41-9a3e-f2fc0eff3d64/installer/0.log" Feb 23 13:16:27.327895 master-0 kubenswrapper[26474]: I0223 13:16:27.327905 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:16:27.351788 master-0 kubenswrapper[26474]: I0223 13:16:27.349739 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-79f587d78f-tblcw" podStartSLOduration=13.087054676 podStartE2EDuration="14.349726545s" podCreationTimestamp="2026-02-23 13:16:13 +0000 UTC" firstStartedPulling="2026-02-23 13:16:14.408643088 +0000 UTC m=+96.255150785" lastFinishedPulling="2026-02-23 13:16:15.671314977 +0000 UTC m=+97.517822654" observedRunningTime="2026-02-23 13:16:16.033556742 +0000 UTC m=+97.880064479" watchObservedRunningTime="2026-02-23 13:16:27.349726545 +0000 UTC m=+109.196234222" Feb 23 13:16:27.402935 master-0 kubenswrapper[26474]: I0223 13:16:27.402713 26474 patch_prober.go:28] interesting pod/console-dd5fdb7d7-wf5bd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Feb 23 13:16:27.402935 master-0 kubenswrapper[26474]: I0223 13:16:27.402837 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dd5fdb7d7-wf5bd" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Feb 23 13:16:27.411580 master-0 kubenswrapper[26474]: I0223 13:16:27.411481 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-var-lock\") pod \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " Feb 23 13:16:27.411860 master-0 kubenswrapper[26474]: I0223 13:16:27.411685 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kubelet-dir\") pod \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " Feb 23 13:16:27.411985 master-0 kubenswrapper[26474]: I0223 13:16:27.411898 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kube-api-access\") pod \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\" (UID: \"379c746b-390e-4f41-9a3e-f2fc0eff3d64\") " Feb 23 13:16:27.412150 master-0 kubenswrapper[26474]: I0223 13:16:27.412048 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "379c746b-390e-4f41-9a3e-f2fc0eff3d64" (UID: "379c746b-390e-4f41-9a3e-f2fc0eff3d64"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:16:27.412246 master-0 kubenswrapper[26474]: I0223 13:16:27.411923 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-var-lock" (OuterVolumeSpecName: "var-lock") pod "379c746b-390e-4f41-9a3e-f2fc0eff3d64" (UID: "379c746b-390e-4f41-9a3e-f2fc0eff3d64"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:16:27.413448 master-0 kubenswrapper[26474]: I0223 13:16:27.412734 26474 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:27.413448 master-0 kubenswrapper[26474]: I0223 13:16:27.412783 26474 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/379c746b-390e-4f41-9a3e-f2fc0eff3d64-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:27.415449 master-0 kubenswrapper[26474]: I0223 13:16:27.415385 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "379c746b-390e-4f41-9a3e-f2fc0eff3d64" (UID: "379c746b-390e-4f41-9a3e-f2fc0eff3d64"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:16:27.514555 master-0 kubenswrapper[26474]: I0223 13:16:27.514457 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379c746b-390e-4f41-9a3e-f2fc0eff3d64-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:28.115996 master-0 kubenswrapper[26474]: I0223 13:16:28.115907 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_379c746b-390e-4f41-9a3e-f2fc0eff3d64/installer/0.log" Feb 23 13:16:28.116774 master-0 kubenswrapper[26474]: I0223 13:16:28.116083 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"379c746b-390e-4f41-9a3e-f2fc0eff3d64","Type":"ContainerDied","Data":"a702d1d38fbbcd3378f86fa1b278dd915f180dcf81cd7b58037219ccd0ef8b38"} Feb 23 13:16:28.116774 master-0 kubenswrapper[26474]: I0223 13:16:28.116151 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:16:28.116774 master-0 kubenswrapper[26474]: I0223 13:16:28.116191 26474 scope.go:117] "RemoveContainer" containerID="031a7f8a16d8a94120bee5d3b226b6d9cbba12b7bcbfb656a145bfd5fa9d7ebc" Feb 23 13:16:28.172769 master-0 kubenswrapper[26474]: I0223 13:16:28.172683 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:16:28.188373 master-0 kubenswrapper[26474]: I0223 13:16:28.188273 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:16:28.413757 master-0 kubenswrapper[26474]: I0223 13:16:28.413076 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379c746b-390e-4f41-9a3e-f2fc0eff3d64" path="/var/lib/kubelet/pods/379c746b-390e-4f41-9a3e-f2fc0eff3d64/volumes" Feb 23 13:16:34.336804 master-0 kubenswrapper[26474]: I0223 13:16:34.336735 26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Feb 23 13:16:34.337807 master-0 kubenswrapper[26474]: I0223 13:16:34.337633 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Feb 23 13:16:37.401857 master-0 kubenswrapper[26474]: I0223 13:16:37.401723 26474 patch_prober.go:28] interesting pod/console-dd5fdb7d7-wf5bd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Feb 23 13:16:37.401857 master-0 
kubenswrapper[26474]: I0223 13:16:37.401826 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dd5fdb7d7-wf5bd" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Feb 23 13:16:38.841940 master-0 kubenswrapper[26474]: I0223 13:16:38.841823 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-768998fb98-dpqwp" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerName="console" containerID="cri-o://860ec94f783c3b653a3048dbdbe8687055c34d3047415d2575f5257d4a2f1cc0" gracePeriod=15 Feb 23 13:16:39.282688 master-0 kubenswrapper[26474]: I0223 13:16:39.282619 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-768998fb98-dpqwp_0b62106f-cfec-45a7-bec6-7a87612c1eb7/console/0.log" Feb 23 13:16:39.282819 master-0 kubenswrapper[26474]: I0223 13:16:39.282774 26474 generic.go:334] "Generic (PLEG): container finished" podID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerID="860ec94f783c3b653a3048dbdbe8687055c34d3047415d2575f5257d4a2f1cc0" exitCode=2 Feb 23 13:16:39.282874 master-0 kubenswrapper[26474]: I0223 13:16:39.282848 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-768998fb98-dpqwp" event={"ID":"0b62106f-cfec-45a7-bec6-7a87612c1eb7","Type":"ContainerDied","Data":"860ec94f783c3b653a3048dbdbe8687055c34d3047415d2575f5257d4a2f1cc0"} Feb 23 13:16:39.282912 master-0 kubenswrapper[26474]: I0223 13:16:39.282896 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-768998fb98-dpqwp" event={"ID":"0b62106f-cfec-45a7-bec6-7a87612c1eb7","Type":"ContainerDied","Data":"ac3c5d7a718c989abecd0d15c339df339a3c0be740a207d9473f7a5dee7ae4ef"} Feb 23 13:16:39.282946 master-0 kubenswrapper[26474]: I0223 13:16:39.282914 26474 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="ac3c5d7a718c989abecd0d15c339df339a3c0be740a207d9473f7a5dee7ae4ef" Feb 23 13:16:39.308139 master-0 kubenswrapper[26474]: I0223 13:16:39.308083 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-768998fb98-dpqwp_0b62106f-cfec-45a7-bec6-7a87612c1eb7/console/0.log" Feb 23 13:16:39.308367 master-0 kubenswrapper[26474]: I0223 13:16:39.308195 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:16:39.424113 master-0 kubenswrapper[26474]: I0223 13:16:39.423991 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-oauth-config\") pod \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " Feb 23 13:16:39.424326 master-0 kubenswrapper[26474]: I0223 13:16:39.424183 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-config\") pod \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " Feb 23 13:16:39.424427 master-0 kubenswrapper[26474]: I0223 13:16:39.424397 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-serving-cert\") pod \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " Feb 23 13:16:39.424529 master-0 kubenswrapper[26474]: I0223 13:16:39.424486 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-trusted-ca-bundle\") pod 
\"0b62106f-cfec-45a7-bec6-7a87612c1eb7\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " Feb 23 13:16:39.424582 master-0 kubenswrapper[26474]: I0223 13:16:39.424562 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csjl7\" (UniqueName: \"kubernetes.io/projected/0b62106f-cfec-45a7-bec6-7a87612c1eb7-kube-api-access-csjl7\") pod \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " Feb 23 13:16:39.424801 master-0 kubenswrapper[26474]: I0223 13:16:39.424731 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-oauth-serving-cert\") pod \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " Feb 23 13:16:39.425074 master-0 kubenswrapper[26474]: I0223 13:16:39.425015 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-config" (OuterVolumeSpecName: "console-config") pod "0b62106f-cfec-45a7-bec6-7a87612c1eb7" (UID: "0b62106f-cfec-45a7-bec6-7a87612c1eb7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:16:39.425225 master-0 kubenswrapper[26474]: I0223 13:16:39.425184 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0b62106f-cfec-45a7-bec6-7a87612c1eb7" (UID: "0b62106f-cfec-45a7-bec6-7a87612c1eb7"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:16:39.425615 master-0 kubenswrapper[26474]: I0223 13:16:39.425515 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-service-ca\") pod \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\" (UID: \"0b62106f-cfec-45a7-bec6-7a87612c1eb7\") " Feb 23 13:16:39.426398 master-0 kubenswrapper[26474]: I0223 13:16:39.425574 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0b62106f-cfec-45a7-bec6-7a87612c1eb7" (UID: "0b62106f-cfec-45a7-bec6-7a87612c1eb7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:16:39.426398 master-0 kubenswrapper[26474]: I0223 13:16:39.425862 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b62106f-cfec-45a7-bec6-7a87612c1eb7" (UID: "0b62106f-cfec-45a7-bec6-7a87612c1eb7"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:16:39.426398 master-0 kubenswrapper[26474]: I0223 13:16:39.426328 26474 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:39.426398 master-0 kubenswrapper[26474]: I0223 13:16:39.426380 26474 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:39.426398 master-0 kubenswrapper[26474]: I0223 13:16:39.426399 26474 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:39.426619 master-0 kubenswrapper[26474]: I0223 13:16:39.426417 26474 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b62106f-cfec-45a7-bec6-7a87612c1eb7-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:39.426687 master-0 kubenswrapper[26474]: I0223 13:16:39.426657 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0b62106f-cfec-45a7-bec6-7a87612c1eb7" (UID: "0b62106f-cfec-45a7-bec6-7a87612c1eb7"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:39.428249 master-0 kubenswrapper[26474]: I0223 13:16:39.428199 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b62106f-cfec-45a7-bec6-7a87612c1eb7-kube-api-access-csjl7" (OuterVolumeSpecName: "kube-api-access-csjl7") pod "0b62106f-cfec-45a7-bec6-7a87612c1eb7" (UID: "0b62106f-cfec-45a7-bec6-7a87612c1eb7"). InnerVolumeSpecName "kube-api-access-csjl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:16:39.428945 master-0 kubenswrapper[26474]: I0223 13:16:39.428854 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0b62106f-cfec-45a7-bec6-7a87612c1eb7" (UID: "0b62106f-cfec-45a7-bec6-7a87612c1eb7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:16:39.528306 master-0 kubenswrapper[26474]: I0223 13:16:39.528164 26474 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:39.528306 master-0 kubenswrapper[26474]: I0223 13:16:39.528306 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csjl7\" (UniqueName: \"kubernetes.io/projected/0b62106f-cfec-45a7-bec6-7a87612c1eb7-kube-api-access-csjl7\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:39.528992 master-0 kubenswrapper[26474]: I0223 13:16:39.528371 26474 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0b62106f-cfec-45a7-bec6-7a87612c1eb7-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:40.288747 master-0 kubenswrapper[26474]: I0223 13:16:40.288645 26474 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-768998fb98-dpqwp" Feb 23 13:16:40.324874 master-0 kubenswrapper[26474]: I0223 13:16:40.322822 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-768998fb98-dpqwp"] Feb 23 13:16:40.327817 master-0 kubenswrapper[26474]: I0223 13:16:40.327763 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-768998fb98-dpqwp"] Feb 23 13:16:40.403058 master-0 kubenswrapper[26474]: I0223 13:16:40.402975 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" path="/var/lib/kubelet/pods/0b62106f-cfec-45a7-bec6-7a87612c1eb7/volumes" Feb 23 13:16:43.514625 master-0 kubenswrapper[26474]: I0223 13:16:43.514552 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 23 13:16:43.515533 master-0 kubenswrapper[26474]: E0223 13:16:43.514852 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerName="console" Feb 23 13:16:43.515533 master-0 kubenswrapper[26474]: I0223 13:16:43.514866 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerName="console" Feb 23 13:16:43.515533 master-0 kubenswrapper[26474]: E0223 13:16:43.514893 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379c746b-390e-4f41-9a3e-f2fc0eff3d64" containerName="installer" Feb 23 13:16:43.515533 master-0 kubenswrapper[26474]: I0223 13:16:43.514899 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="379c746b-390e-4f41-9a3e-f2fc0eff3d64" containerName="installer" Feb 23 13:16:43.515533 master-0 kubenswrapper[26474]: I0223 13:16:43.515050 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="379c746b-390e-4f41-9a3e-f2fc0eff3d64" containerName="installer" Feb 23 13:16:43.515533 master-0 kubenswrapper[26474]: 
I0223 13:16:43.515070 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b62106f-cfec-45a7-bec6-7a87612c1eb7" containerName="console"
Feb 23 13:16:43.516978 master-0 kubenswrapper[26474]: I0223 13:16:43.516943 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.519530 master-0 kubenswrapper[26474]: I0223 13:16:43.519485 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 23 13:16:43.519628 master-0 kubenswrapper[26474]: I0223 13:16:43.519499 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 23 13:16:43.519886 master-0 kubenswrapper[26474]: I0223 13:16:43.519824 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 23 13:16:43.520072 master-0 kubenswrapper[26474]: I0223 13:16:43.519978 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 23 13:16:43.520140 master-0 kubenswrapper[26474]: I0223 13:16:43.520128 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 23 13:16:43.520369 master-0 kubenswrapper[26474]: I0223 13:16:43.520309 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 23 13:16:43.521464 master-0 kubenswrapper[26474]: I0223 13:16:43.521435 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 23 13:16:43.526373 master-0 kubenswrapper[26474]: I0223 13:16:43.526311 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 23 13:16:43.543514 master-0 kubenswrapper[26474]: I0223 13:16:43.543008 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 23 13:16:43.596518 master-0 kubenswrapper[26474]: I0223 13:16:43.596438 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a2390c53-10a9-46d0-be73-d3ed303df396-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596518 master-0 kubenswrapper[26474]: I0223 13:16:43.596497 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596763 master-0 kubenswrapper[26474]: I0223 13:16:43.596553 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a2390c53-10a9-46d0-be73-d3ed303df396-config-out\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596763 master-0 kubenswrapper[26474]: I0223 13:16:43.596583 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-web-config\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596763 master-0 kubenswrapper[26474]: I0223 13:16:43.596607 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596763 master-0 kubenswrapper[26474]: I0223 13:16:43.596639 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596763 master-0 kubenswrapper[26474]: I0223 13:16:43.596678 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596763 master-0 kubenswrapper[26474]: I0223 13:16:43.596705 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2390c53-10a9-46d0-be73-d3ed303df396-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596763 master-0 kubenswrapper[26474]: I0223 13:16:43.596739 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a2390c53-10a9-46d0-be73-d3ed303df396-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.596763 master-0 kubenswrapper[26474]: I0223 13:16:43.596765 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wc8\" (UniqueName: \"kubernetes.io/projected/a2390c53-10a9-46d0-be73-d3ed303df396-kube-api-access-64wc8\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.597031 master-0 kubenswrapper[26474]: I0223 13:16:43.596792 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a2390c53-10a9-46d0-be73-d3ed303df396-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.597031 master-0 kubenswrapper[26474]: I0223 13:16:43.596815 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-config-volume\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698050 master-0 kubenswrapper[26474]: I0223 13:16:43.697989 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a2390c53-10a9-46d0-be73-d3ed303df396-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698277 master-0 kubenswrapper[26474]: I0223 13:16:43.698061 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698277 master-0 kubenswrapper[26474]: I0223 13:16:43.698241 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a2390c53-10a9-46d0-be73-d3ed303df396-config-out\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698524 master-0 kubenswrapper[26474]: I0223 13:16:43.698306 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-web-config\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698729 master-0 kubenswrapper[26474]: I0223 13:16:43.698679 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a2390c53-10a9-46d0-be73-d3ed303df396-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698776 master-0 kubenswrapper[26474]: I0223 13:16:43.698712 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698831 master-0 kubenswrapper[26474]: I0223 13:16:43.698803 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698902 master-0 kubenswrapper[26474]: I0223 13:16:43.698883 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.698942 master-0 kubenswrapper[26474]: I0223 13:16:43.698925 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2390c53-10a9-46d0-be73-d3ed303df396-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.699720 master-0 kubenswrapper[26474]: I0223 13:16:43.698970 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a2390c53-10a9-46d0-be73-d3ed303df396-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.699720 master-0 kubenswrapper[26474]: I0223 13:16:43.699001 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64wc8\" (UniqueName: \"kubernetes.io/projected/a2390c53-10a9-46d0-be73-d3ed303df396-kube-api-access-64wc8\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.699720 master-0 kubenswrapper[26474]: I0223 13:16:43.699022 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a2390c53-10a9-46d0-be73-d3ed303df396-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.699720 master-0 kubenswrapper[26474]: I0223 13:16:43.699046 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-config-volume\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.699903 master-0 kubenswrapper[26474]: I0223 13:16:43.699880 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a2390c53-10a9-46d0-be73-d3ed303df396-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.700407 master-0 kubenswrapper[26474]: I0223 13:16:43.700372 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2390c53-10a9-46d0-be73-d3ed303df396-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.701872 master-0 kubenswrapper[26474]: I0223 13:16:43.701843 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-web-config\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.702365 master-0 kubenswrapper[26474]: I0223 13:16:43.702295 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-config-volume\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.702478 master-0 kubenswrapper[26474]: I0223 13:16:43.702424 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.702575 master-0 kubenswrapper[26474]: I0223 13:16:43.702548 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a2390c53-10a9-46d0-be73-d3ed303df396-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.702642 master-0 kubenswrapper[26474]: I0223 13:16:43.702613 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a2390c53-10a9-46d0-be73-d3ed303df396-config-out\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.703421 master-0 kubenswrapper[26474]: I0223 13:16:43.703388 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.711788 master-0 kubenswrapper[26474]: I0223 13:16:43.711636 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.712533 master-0 kubenswrapper[26474]: I0223 13:16:43.712226 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a2390c53-10a9-46d0-be73-d3ed303df396-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.720527 master-0 kubenswrapper[26474]: I0223 13:16:43.720471 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64wc8\" (UniqueName: \"kubernetes.io/projected/a2390c53-10a9-46d0-be73-d3ed303df396-kube-api-access-64wc8\") pod \"alertmanager-main-0\" (UID: \"a2390c53-10a9-46d0-be73-d3ed303df396\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:43.838695 master-0 kubenswrapper[26474]: I0223 13:16:43.838546 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:16:44.301650 master-0 kubenswrapper[26474]: I0223 13:16:44.301574 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 23 13:16:44.315445 master-0 kubenswrapper[26474]: W0223 13:16:44.308393 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2390c53_10a9_46d0_be73_d3ed303df396.slice/crio-8ce23fab1546fc431dca9e6f7582d86243da07388dbae15198acbafc5a9b5b5c WatchSource:0}: Error finding container 8ce23fab1546fc431dca9e6f7582d86243da07388dbae15198acbafc5a9b5b5c: Status 404 returned error can't find the container with id 8ce23fab1546fc431dca9e6f7582d86243da07388dbae15198acbafc5a9b5b5c
Feb 23 13:16:44.336123 master-0 kubenswrapper[26474]: I0223 13:16:44.336069 26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Feb 23 13:16:44.336296 master-0 kubenswrapper[26474]: I0223 13:16:44.336144 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Feb 23 13:16:44.339461 master-0 kubenswrapper[26474]: I0223 13:16:44.339385 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a2390c53-10a9-46d0-be73-d3ed303df396","Type":"ContainerStarted","Data":"8ce23fab1546fc431dca9e6f7582d86243da07388dbae15198acbafc5a9b5b5c"}
Feb 23 13:16:44.479749 master-0 kubenswrapper[26474]: I0223 13:16:44.479390 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-8595d4f886-qqtst"]
Feb 23 13:16:44.481481 master-0 kubenswrapper[26474]: I0223 13:16:44.481247 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.488945 master-0 kubenswrapper[26474]: I0223 13:16:44.488878 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 23 13:16:44.489102 master-0 kubenswrapper[26474]: I0223 13:16:44.489024 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 23 13:16:44.489276 master-0 kubenswrapper[26474]: I0223 13:16:44.489220 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 23 13:16:44.489566 master-0 kubenswrapper[26474]: I0223 13:16:44.489248 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 23 13:16:44.489566 master-0 kubenswrapper[26474]: I0223 13:16:44.489243 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 23 13:16:44.489859 master-0 kubenswrapper[26474]: I0223 13:16:44.489838 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9una895oaglcl"
Feb 23 13:16:44.508846 master-0 kubenswrapper[26474]: I0223 13:16:44.508783 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8595d4f886-qqtst"]
Feb 23 13:16:44.515367 master-0 kubenswrapper[26474]: I0223 13:16:44.515286 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-tls\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.515367 master-0 kubenswrapper[26474]: I0223 13:16:44.515378 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-grpc-tls\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.516136 master-0 kubenswrapper[26474]: I0223 13:16:44.515408 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.516136 master-0 kubenswrapper[26474]: I0223 13:16:44.515441 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.516136 master-0 kubenswrapper[26474]: I0223 13:16:44.515654 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.516136 master-0 kubenswrapper[26474]: I0223 13:16:44.515745 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.516136 master-0 kubenswrapper[26474]: I0223 13:16:44.515891 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa9a71b5-a37b-418a-b602-8eb3a94566b3-metrics-client-ca\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.516136 master-0 kubenswrapper[26474]: I0223 13:16:44.515989 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47gv\" (UniqueName: \"kubernetes.io/projected/fa9a71b5-a37b-418a-b602-8eb3a94566b3-kube-api-access-q47gv\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.618043 master-0 kubenswrapper[26474]: I0223 13:16:44.617926 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-grpc-tls\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.618043 master-0 kubenswrapper[26474]: I0223 13:16:44.618000 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.618449 master-0 kubenswrapper[26474]: I0223 13:16:44.618034 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.618449 master-0 kubenswrapper[26474]: I0223 13:16:44.618099 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.618449 master-0 kubenswrapper[26474]: I0223 13:16:44.618131 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.618449 master-0 kubenswrapper[26474]: I0223 13:16:44.618189 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa9a71b5-a37b-418a-b602-8eb3a94566b3-metrics-client-ca\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.618449 master-0 kubenswrapper[26474]: I0223 13:16:44.618232 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47gv\" (UniqueName: \"kubernetes.io/projected/fa9a71b5-a37b-418a-b602-8eb3a94566b3-kube-api-access-q47gv\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.618449 master-0 kubenswrapper[26474]: I0223 13:16:44.618270 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-tls\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.620483 master-0 kubenswrapper[26474]: I0223 13:16:44.620420 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa9a71b5-a37b-418a-b602-8eb3a94566b3-metrics-client-ca\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.622246 master-0 kubenswrapper[26474]: I0223 13:16:44.622211 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-tls\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.622359 master-0 kubenswrapper[26474]: I0223 13:16:44.622310 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.622984 master-0 kubenswrapper[26474]: I0223 13:16:44.622961 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-grpc-tls\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.624700 master-0 kubenswrapper[26474]: I0223 13:16:44.624645 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.624930 master-0 kubenswrapper[26474]: I0223 13:16:44.624888 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.630085 master-0 kubenswrapper[26474]: I0223 13:16:44.630040 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fa9a71b5-a37b-418a-b602-8eb3a94566b3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.637425 master-0 kubenswrapper[26474]: I0223 13:16:44.637378 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47gv\" (UniqueName: \"kubernetes.io/projected/fa9a71b5-a37b-418a-b602-8eb3a94566b3-kube-api-access-q47gv\") pod \"thanos-querier-8595d4f886-qqtst\" (UID: \"fa9a71b5-a37b-418a-b602-8eb3a94566b3\") " pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:44.806190 master-0 kubenswrapper[26474]: I0223 13:16:44.806118 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:45.303591 master-0 kubenswrapper[26474]: I0223 13:16:45.303410 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8595d4f886-qqtst"]
Feb 23 13:16:45.639361 master-0 kubenswrapper[26474]: W0223 13:16:45.639184 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9a71b5_a37b_418a_b602_8eb3a94566b3.slice/crio-4b0ebb61acb1ceef5b2a47d229be2aa5f3e74c9c36080274a0214231f90a8b27 WatchSource:0}: Error finding container 4b0ebb61acb1ceef5b2a47d229be2aa5f3e74c9c36080274a0214231f90a8b27: Status 404 returned error can't find the container with id 4b0ebb61acb1ceef5b2a47d229be2aa5f3e74c9c36080274a0214231f90a8b27
Feb 23 13:16:46.357989 master-0 kubenswrapper[26474]: I0223 13:16:46.357898 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" event={"ID":"fa9a71b5-a37b-418a-b602-8eb3a94566b3","Type":"ContainerStarted","Data":"4b0ebb61acb1ceef5b2a47d229be2aa5f3e74c9c36080274a0214231f90a8b27"}
Feb 23 13:16:46.361820 master-0 kubenswrapper[26474]: I0223 13:16:46.361775 26474 generic.go:334] "Generic (PLEG): container finished" podID="a2390c53-10a9-46d0-be73-d3ed303df396" containerID="17475e79a3b824ef4bc7d6e8f2099c148999aef1c333ce0be769cc7d434d1a39" exitCode=0
Feb 23 13:16:46.361820 master-0 kubenswrapper[26474]: I0223 13:16:46.361819 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a2390c53-10a9-46d0-be73-d3ed303df396","Type":"ContainerDied","Data":"17475e79a3b824ef4bc7d6e8f2099c148999aef1c333ce0be769cc7d434d1a39"}
Feb 23 13:16:47.204744 master-0 kubenswrapper[26474]: I0223 13:16:47.204533 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-79f8868b4-qms96"]
Feb 23 13:16:47.205814 master-0 kubenswrapper[26474]: I0223 13:16:47.205730 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.210056 master-0 kubenswrapper[26474]: I0223 13:16:47.209665 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-49pt2ng1l1349"
Feb 23 13:16:47.219076 master-0 kubenswrapper[26474]: I0223 13:16:47.219006 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-79f8868b4-qms96"]
Feb 23 13:16:47.247197 master-0 kubenswrapper[26474]: I0223 13:16:47.222282 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-69f7f878d4-746vx"]
Feb 23 13:16:47.247197 master-0 kubenswrapper[26474]: I0223 13:16:47.222719 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" podUID="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" containerName="metrics-server" containerID="cri-o://3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438" gracePeriod=170
Feb 23 13:16:47.277862 master-0 kubenswrapper[26474]: I0223 13:16:47.277792 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-secret-metrics-server-tls\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.278551 master-0 kubenswrapper[26474]: I0223 13:16:47.278514 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49gq\" (UniqueName: \"kubernetes.io/projected/572b4e84-443f-4a5e-9f3a-c92bc899c245-kube-api-access-n49gq\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.278654 master-0 kubenswrapper[26474]: I0223 13:16:47.278563 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/572b4e84-443f-4a5e-9f3a-c92bc899c245-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.278654 master-0 kubenswrapper[26474]: I0223 13:16:47.278650 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-secret-metrics-client-certs\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.278994 master-0 kubenswrapper[26474]: I0223 13:16:47.278928 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-client-ca-bundle\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.279117 master-0 kubenswrapper[26474]: I0223 13:16:47.279089 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/572b4e84-443f-4a5e-9f3a-c92bc899c245-metrics-server-audit-profiles\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.279238 master-0 kubenswrapper[26474]: I0223 13:16:47.279209 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/572b4e84-443f-4a5e-9f3a-c92bc899c245-audit-log\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.381521 master-0 kubenswrapper[26474]: I0223 13:16:47.381460 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-client-ca-bundle\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.381811 master-0 kubenswrapper[26474]: I0223 13:16:47.381786 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/572b4e84-443f-4a5e-9f3a-c92bc899c245-metrics-server-audit-profiles\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.381871 master-0 kubenswrapper[26474]: I0223 13:16:47.381836 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/572b4e84-443f-4a5e-9f3a-c92bc899c245-audit-log\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.381871 master-0 kubenswrapper[26474]: I0223 13:16:47.381855 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-secret-metrics-server-tls\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.381939 master-0 kubenswrapper[26474]: I0223 13:16:47.381878 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n49gq\" (UniqueName: \"kubernetes.io/projected/572b4e84-443f-4a5e-9f3a-c92bc899c245-kube-api-access-n49gq\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.381939 master-0 kubenswrapper[26474]: I0223 13:16:47.381898 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/572b4e84-443f-4a5e-9f3a-c92bc899c245-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:16:47.381939 master-0 kubenswrapper[26474]: I0223 13:16:47.381932 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName:
\"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-secret-metrics-client-certs\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.383594 master-0 kubenswrapper[26474]: I0223 13:16:47.383562 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/572b4e84-443f-4a5e-9f3a-c92bc899c245-audit-log\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.383741 master-0 kubenswrapper[26474]: I0223 13:16:47.383707 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/572b4e84-443f-4a5e-9f3a-c92bc899c245-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.384003 master-0 kubenswrapper[26474]: I0223 13:16:47.383950 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/572b4e84-443f-4a5e-9f3a-c92bc899c245-metrics-server-audit-profiles\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.389432 master-0 kubenswrapper[26474]: I0223 13:16:47.389377 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-secret-metrics-server-tls\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.390301 master-0 
kubenswrapper[26474]: I0223 13:16:47.390258 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-secret-metrics-client-certs\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.390883 master-0 kubenswrapper[26474]: I0223 13:16:47.390791 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572b4e84-443f-4a5e-9f3a-c92bc899c245-client-ca-bundle\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.405325 master-0 kubenswrapper[26474]: I0223 13:16:47.404307 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n49gq\" (UniqueName: \"kubernetes.io/projected/572b4e84-443f-4a5e-9f3a-c92bc899c245-kube-api-access-n49gq\") pod \"metrics-server-79f8868b4-qms96\" (UID: \"572b4e84-443f-4a5e-9f3a-c92bc899c245\") " pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.407380 master-0 kubenswrapper[26474]: I0223 13:16:47.406881 26474 patch_prober.go:28] interesting pod/console-dd5fdb7d7-wf5bd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Feb 23 13:16:47.407380 master-0 kubenswrapper[26474]: I0223 13:16:47.406975 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dd5fdb7d7-wf5bd" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Feb 23 13:16:47.475835 master-0 
kubenswrapper[26474]: I0223 13:16:47.474298 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2"] Feb 23 13:16:47.476083 master-0 kubenswrapper[26474]: I0223 13:16:47.475977 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.479049 master-0 kubenswrapper[26474]: I0223 13:16:47.478987 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 23 13:16:47.479269 master-0 kubenswrapper[26474]: I0223 13:16:47.479240 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 23 13:16:47.479455 master-0 kubenswrapper[26474]: I0223 13:16:47.479417 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 23 13:16:47.481262 master-0 kubenswrapper[26474]: I0223 13:16:47.479620 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 23 13:16:47.488886 master-0 kubenswrapper[26474]: I0223 13:16:47.488834 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 23 13:16:47.492844 master-0 kubenswrapper[26474]: I0223 13:16:47.492783 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 23 13:16:47.505892 master-0 kubenswrapper[26474]: I0223 13:16:47.505798 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2"] Feb 23 13:16:47.573867 master-0 kubenswrapper[26474]: I0223 13:16:47.573756 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:16:47.587583 master-0 kubenswrapper[26474]: I0223 13:16:47.587526 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.587683 master-0 kubenswrapper[26474]: I0223 13:16:47.587594 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-metrics-client-ca\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.587683 master-0 kubenswrapper[26474]: I0223 13:16:47.587656 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b75mp\" (UniqueName: \"kubernetes.io/projected/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-kube-api-access-b75mp\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.587935 master-0 kubenswrapper[26474]: I0223 13:16:47.587875 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.588169 master-0 
kubenswrapper[26474]: I0223 13:16:47.588107 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-federate-client-tls\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.588222 master-0 kubenswrapper[26474]: I0223 13:16:47.588194 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-serving-certs-ca-bundle\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.588503 master-0 kubenswrapper[26474]: I0223 13:16:47.588410 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-secret-telemeter-client\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.588601 master-0 kubenswrapper[26474]: I0223 13:16:47.588559 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-telemeter-client-tls\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.692176 master-0 kubenswrapper[26474]: I0223 13:16:47.691271 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.692176 master-0 kubenswrapper[26474]: I0223 13:16:47.691428 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-federate-client-tls\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.692176 master-0 kubenswrapper[26474]: I0223 13:16:47.691462 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-serving-certs-ca-bundle\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.692176 master-0 kubenswrapper[26474]: I0223 13:16:47.691493 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-secret-telemeter-client\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.692176 master-0 kubenswrapper[26474]: I0223 13:16:47.691520 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-telemeter-client-tls\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 
13:16:47.692176 master-0 kubenswrapper[26474]: I0223 13:16:47.692129 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.692898 master-0 kubenswrapper[26474]: I0223 13:16:47.692255 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-metrics-client-ca\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.692898 master-0 kubenswrapper[26474]: I0223 13:16:47.692416 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b75mp\" (UniqueName: \"kubernetes.io/projected/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-kube-api-access-b75mp\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.692898 master-0 kubenswrapper[26474]: I0223 13:16:47.692794 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.697590 master-0 kubenswrapper[26474]: I0223 13:16:47.695196 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-serving-certs-ca-bundle\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.697773 master-0 kubenswrapper[26474]: I0223 13:16:47.697725 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-federate-client-tls\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.698024 master-0 kubenswrapper[26474]: I0223 13:16:47.697975 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-secret-telemeter-client\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.698994 master-0 kubenswrapper[26474]: I0223 13:16:47.698933 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-telemeter-client-tls\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.702753 master-0 kubenswrapper[26474]: I0223 13:16:47.699372 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" 
Feb 23 13:16:47.713527 master-0 kubenswrapper[26474]: I0223 13:16:47.710816 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-metrics-client-ca\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.713527 master-0 kubenswrapper[26474]: I0223 13:16:47.713504 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b75mp\" (UniqueName: \"kubernetes.io/projected/ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa-kube-api-access-b75mp\") pod \"telemeter-client-6fd6fdc9d8-j4mb2\" (UID: \"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa\") " pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:47.802380 master-0 kubenswrapper[26474]: I0223 13:16:47.802257 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" Feb 23 13:16:48.795447 master-0 kubenswrapper[26474]: I0223 13:16:48.790395 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:16:48.798944 master-0 kubenswrapper[26474]: I0223 13:16:48.798873 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.803566 master-0 kubenswrapper[26474]: I0223 13:16:48.803517 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 23 13:16:48.804283 master-0 kubenswrapper[26474]: I0223 13:16:48.804214 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 23 13:16:48.804553 master-0 kubenswrapper[26474]: I0223 13:16:48.804507 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 23 13:16:48.804873 master-0 kubenswrapper[26474]: I0223 13:16:48.804808 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 23 13:16:48.805466 master-0 kubenswrapper[26474]: I0223 13:16:48.805399 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-69tp4ba64sllc" Feb 23 13:16:48.805835 master-0 kubenswrapper[26474]: I0223 13:16:48.805780 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 23 13:16:48.805994 master-0 kubenswrapper[26474]: I0223 13:16:48.805966 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 23 13:16:48.806287 master-0 kubenswrapper[26474]: I0223 13:16:48.806258 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 23 13:16:48.806612 master-0 kubenswrapper[26474]: I0223 13:16:48.806560 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 23 13:16:48.806889 master-0 kubenswrapper[26474]: I0223 13:16:48.806857 26474 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 23 13:16:48.812482 master-0 kubenswrapper[26474]: I0223 13:16:48.810095 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 23 13:16:48.812979 master-0 kubenswrapper[26474]: I0223 13:16:48.812881 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 23 13:16:48.823032 master-0 kubenswrapper[26474]: I0223 13:16:48.822952 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:16:48.911838 master-0 kubenswrapper[26474]: I0223 13:16:48.911763 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.911838 master-0 kubenswrapper[26474]: I0223 13:16:48.911822 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.911838 master-0 kubenswrapper[26474]: I0223 13:16:48.911860 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.911884 
26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.911905 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.911932 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.911951 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.911966 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbl84\" (UniqueName: \"kubernetes.io/projected/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-kube-api-access-sbl84\") pod \"prometheus-k8s-0\" (UID: 
\"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.911987 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.912013 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-config-out\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.912033 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.912052 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.912071 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-web-config\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.912131 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.912155 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-config\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.912176 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 13:16:48.912210 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:48.912296 master-0 kubenswrapper[26474]: I0223 
13:16:48.912231 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.013567 master-0 kubenswrapper[26474]: I0223 13:16:49.013460 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.013567 master-0 kubenswrapper[26474]: I0223 13:16:49.013536 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-config-out\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.013567 master-0 kubenswrapper[26474]: I0223 13:16:49.013567 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.013567 master-0 kubenswrapper[26474]: I0223 13:16:49.013595 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-web-config\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 
13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013645 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013680 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-config\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013712 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013755 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013784 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013825 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013846 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013882 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013913 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013941 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-kube-rbac-proxy\") pod 
\"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.013975 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.014001 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.014026 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbl84\" (UniqueName: \"kubernetes.io/projected/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-kube-api-access-sbl84\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014054 master-0 kubenswrapper[26474]: I0223 13:16:49.014057 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.014881 master-0 kubenswrapper[26474]: I0223 13:16:49.014844 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.018553 master-0 kubenswrapper[26474]: I0223 13:16:49.016165 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.018553 master-0 kubenswrapper[26474]: I0223 13:16:49.017834 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.018758 master-0 kubenswrapper[26474]: I0223 13:16:49.018573 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.021171 master-0 kubenswrapper[26474]: I0223 13:16:49.019772 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.021171 master-0 kubenswrapper[26474]: I0223 13:16:49.020292 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.021171 master-0 kubenswrapper[26474]: I0223 13:16:49.020914 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-config-out\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.021171 master-0 kubenswrapper[26474]: I0223 13:16:49.021013 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.021477 master-0 kubenswrapper[26474]: I0223 13:16:49.021408 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-config\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.021640 master-0 kubenswrapper[26474]: I0223 13:16:49.021607 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.021735 master-0 kubenswrapper[26474]: I0223 13:16:49.021649 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.021829 master-0 kubenswrapper[26474]: I0223 13:16:49.021792 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-web-config\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.022760 master-0 kubenswrapper[26474]: I0223 13:16:49.022716 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.022841 master-0 kubenswrapper[26474]: I0223 13:16:49.022780 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.022890 master-0 kubenswrapper[26474]: I0223 13:16:49.022842 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.026369 master-0 kubenswrapper[26474]: I0223 13:16:49.026320 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.026693 master-0 kubenswrapper[26474]: I0223 13:16:49.026636 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.035541 master-0 kubenswrapper[26474]: I0223 13:16:49.035488 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbl84\" (UniqueName: \"kubernetes.io/projected/fbca6439-68f9-4cac-b0e6-1f66ff0aa11f-kube-api-access-sbl84\") pod \"prometheus-k8s-0\" (UID: \"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.125512 master-0 kubenswrapper[26474]: I0223 13:16:49.125360 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:16:49.399322 master-0 kubenswrapper[26474]: I0223 13:16:49.399145 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" event={"ID":"fa9a71b5-a37b-418a-b602-8eb3a94566b3","Type":"ContainerStarted","Data":"14c492fcea9edb04261260b8ec6601c669c44030add71b2d58cfeb0c5f9a1d47"} Feb 23 13:16:49.403596 master-0 kubenswrapper[26474]: I0223 13:16:49.403546 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a2390c53-10a9-46d0-be73-d3ed303df396","Type":"ContainerStarted","Data":"b1e5327502ef177c3fd06bdc0dd59671205fb85d1d9736c31c54d9b675d2f10c"} Feb 23 13:16:49.539010 master-0 kubenswrapper[26474]: I0223 13:16:49.537984 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-79f8868b4-qms96"] Feb 23 13:16:49.545573 master-0 kubenswrapper[26474]: W0223 13:16:49.545536 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod572b4e84_443f_4a5e_9f3a_c92bc899c245.slice/crio-6472f21900f37081ee218e8257301295b9f96264b87223fcd6709b26de995104 WatchSource:0}: Error finding container 6472f21900f37081ee218e8257301295b9f96264b87223fcd6709b26de995104: Status 404 returned error can't find the container with id 6472f21900f37081ee218e8257301295b9f96264b87223fcd6709b26de995104 Feb 23 13:16:49.613273 master-0 kubenswrapper[26474]: I0223 13:16:49.613198 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2"] Feb 23 13:16:49.621229 master-0 kubenswrapper[26474]: W0223 13:16:49.621119 26474 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5102bd_8d1e_4001_ba0c_6c1e2b3ca4fa.slice/crio-069ab5a5b7653e11c1b5d0b808ebb0f701de79449cc9e2cc8c7e3069c250c628 WatchSource:0}: Error finding container 069ab5a5b7653e11c1b5d0b808ebb0f701de79449cc9e2cc8c7e3069c250c628: Status 404 returned error can't find the container with id 069ab5a5b7653e11c1b5d0b808ebb0f701de79449cc9e2cc8c7e3069c250c628 Feb 23 13:16:49.724080 master-0 kubenswrapper[26474]: I0223 13:16:49.724018 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:16:49.737415 master-0 kubenswrapper[26474]: W0223 13:16:49.737350 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbca6439_68f9_4cac_b0e6_1f66ff0aa11f.slice/crio-800754fa2c5d657a5df9d6f0f46f48589ce7a545d565c582c3db9d237a9b1d87 WatchSource:0}: Error finding container 800754fa2c5d657a5df9d6f0f46f48589ce7a545d565c582c3db9d237a9b1d87: Status 404 returned error can't find the container with id 800754fa2c5d657a5df9d6f0f46f48589ce7a545d565c582c3db9d237a9b1d87 Feb 23 13:16:50.430186 master-0 kubenswrapper[26474]: I0223 13:16:50.429978 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-79f8868b4-qms96" event={"ID":"572b4e84-443f-4a5e-9f3a-c92bc899c245","Type":"ContainerStarted","Data":"601203caae026fd74266f6c283f25e151d0809862b730896e9cd57abf77ac8cc"} Feb 23 13:16:50.430186 master-0 kubenswrapper[26474]: I0223 13:16:50.430072 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-79f8868b4-qms96" event={"ID":"572b4e84-443f-4a5e-9f3a-c92bc899c245","Type":"ContainerStarted","Data":"6472f21900f37081ee218e8257301295b9f96264b87223fcd6709b26de995104"} Feb 23 13:16:50.433869 master-0 kubenswrapper[26474]: I0223 13:16:50.433806 26474 generic.go:334] "Generic (PLEG): container finished" 
podID="fbca6439-68f9-4cac-b0e6-1f66ff0aa11f" containerID="a385af0afeab6c1cf965680dc4cba713c74cc0c434b2c6ff2843e5d3f05d06a6" exitCode=0 Feb 23 13:16:50.434024 master-0 kubenswrapper[26474]: I0223 13:16:50.433877 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f","Type":"ContainerDied","Data":"a385af0afeab6c1cf965680dc4cba713c74cc0c434b2c6ff2843e5d3f05d06a6"} Feb 23 13:16:50.434024 master-0 kubenswrapper[26474]: I0223 13:16:50.433963 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f","Type":"ContainerStarted","Data":"800754fa2c5d657a5df9d6f0f46f48589ce7a545d565c582c3db9d237a9b1d87"} Feb 23 13:16:50.436488 master-0 kubenswrapper[26474]: I0223 13:16:50.436447 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" event={"ID":"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa","Type":"ContainerStarted","Data":"069ab5a5b7653e11c1b5d0b808ebb0f701de79449cc9e2cc8c7e3069c250c628"} Feb 23 13:16:50.445200 master-0 kubenswrapper[26474]: I0223 13:16:50.440825 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" event={"ID":"fa9a71b5-a37b-418a-b602-8eb3a94566b3","Type":"ContainerStarted","Data":"0b2d4c68ada57aa8c6b03e93fd7d9ffa1c7091d0028a66ffeb59cbcef0a0d287"} Feb 23 13:16:50.445200 master-0 kubenswrapper[26474]: I0223 13:16:50.440902 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" event={"ID":"fa9a71b5-a37b-418a-b602-8eb3a94566b3","Type":"ContainerStarted","Data":"23012c2156908d2b54894ad10873fe90d4af80e5789326df0b2c384caef2094b"} Feb 23 13:16:50.446944 master-0 kubenswrapper[26474]: I0223 13:16:50.446894 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"a2390c53-10a9-46d0-be73-d3ed303df396","Type":"ContainerStarted","Data":"9872623949dd802ffb9ad6deea4279437f2cbd673d57bda610fc3581c353ff86"} Feb 23 13:16:50.446944 master-0 kubenswrapper[26474]: I0223 13:16:50.446935 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a2390c53-10a9-46d0-be73-d3ed303df396","Type":"ContainerStarted","Data":"4dafb7ba1e200cda042c8335a200ef8a3990777fff8068feaf499e134c7857ba"} Feb 23 13:16:50.446944 master-0 kubenswrapper[26474]: I0223 13:16:50.446946 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a2390c53-10a9-46d0-be73-d3ed303df396","Type":"ContainerStarted","Data":"28c4a97857ee5558b890cc64c624f8ad9811c4863cb04f628a5c5d511df6d61f"} Feb 23 13:16:50.447117 master-0 kubenswrapper[26474]: I0223 13:16:50.446956 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a2390c53-10a9-46d0-be73-d3ed303df396","Type":"ContainerStarted","Data":"6e810882dbf485fcbf48d6c16542d34ef95dc0fc1f561b0261f01651f2cb1596"} Feb 23 13:16:50.455916 master-0 kubenswrapper[26474]: I0223 13:16:50.455811 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-79f8868b4-qms96" podStartSLOduration=3.455784866 podStartE2EDuration="3.455784866s" podCreationTimestamp="2026-02-23 13:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:16:50.451386338 +0000 UTC m=+132.297894035" watchObservedRunningTime="2026-02-23 13:16:50.455784866 +0000 UTC m=+132.302292553" Feb 23 13:16:51.458678 master-0 kubenswrapper[26474]: I0223 13:16:51.458561 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" 
event={"ID":"fa9a71b5-a37b-418a-b602-8eb3a94566b3","Type":"ContainerStarted","Data":"110445263f82e432f7020c0dfe0e51fabc3ab2c31bad3b7d58c62927dea581f9"} Feb 23 13:16:51.458678 master-0 kubenswrapper[26474]: I0223 13:16:51.458624 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" event={"ID":"fa9a71b5-a37b-418a-b602-8eb3a94566b3","Type":"ContainerStarted","Data":"9cc53535b1aa10cfaa8a38cfc90ff2a9e3eadcc78afcf1f9169a160d3a0ec7e6"} Feb 23 13:16:51.465301 master-0 kubenswrapper[26474]: I0223 13:16:51.465262 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a2390c53-10a9-46d0-be73-d3ed303df396","Type":"ContainerStarted","Data":"0afe6e69619fa65ea0205b844ac20c35ec979005294a474a249732e34404f65c"} Feb 23 13:16:51.505383 master-0 kubenswrapper[26474]: I0223 13:16:51.504861 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.041816946 podStartE2EDuration="8.50484007s" podCreationTimestamp="2026-02-23 13:16:43 +0000 UTC" firstStartedPulling="2026-02-23 13:16:44.31258471 +0000 UTC m=+126.159092407" lastFinishedPulling="2026-02-23 13:16:50.775607854 +0000 UTC m=+132.622115531" observedRunningTime="2026-02-23 13:16:51.504830119 +0000 UTC m=+133.351337816" watchObservedRunningTime="2026-02-23 13:16:51.50484007 +0000 UTC m=+133.351347747" Feb 23 13:16:52.477552 master-0 kubenswrapper[26474]: I0223 13:16:52.477468 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" event={"ID":"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa","Type":"ContainerStarted","Data":"7424264c7f5b74648b8129b20f69d13f48c8f332c24b5814a305bc0475cf1dc9"} Feb 23 13:16:52.477552 master-0 kubenswrapper[26474]: I0223 13:16:52.477546 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" 
event={"ID":"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa","Type":"ContainerStarted","Data":"48b12435c90c016525c9ed477175e9fb2bf5f54216f786176a37b4cb90d519d4"} Feb 23 13:16:52.477552 master-0 kubenswrapper[26474]: I0223 13:16:52.477558 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" event={"ID":"ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa","Type":"ContainerStarted","Data":"d23dc4f6ec4974c969ac46b115f797a0b16e1a83f0addbd62dd6e0fb8a1391a5"} Feb 23 13:16:52.485131 master-0 kubenswrapper[26474]: I0223 13:16:52.485047 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" event={"ID":"fa9a71b5-a37b-418a-b602-8eb3a94566b3","Type":"ContainerStarted","Data":"77083841d6395e743c7cfde3016531d934a517aa9f40f90fe1700f42e1b39ae6"} Feb 23 13:16:52.485887 master-0 kubenswrapper[26474]: I0223 13:16:52.485851 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" Feb 23 13:16:52.560465 master-0 kubenswrapper[26474]: I0223 13:16:52.560199 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst" podStartSLOduration=3.435128282 podStartE2EDuration="8.560167408s" podCreationTimestamp="2026-02-23 13:16:44 +0000 UTC" firstStartedPulling="2026-02-23 13:16:45.645453404 +0000 UTC m=+127.491961081" lastFinishedPulling="2026-02-23 13:16:50.77049253 +0000 UTC m=+132.617000207" observedRunningTime="2026-02-23 13:16:52.553493464 +0000 UTC m=+134.400001181" watchObservedRunningTime="2026-02-23 13:16:52.560167408 +0000 UTC m=+134.406675095" Feb 23 13:16:52.563281 master-0 kubenswrapper[26474]: I0223 13:16:52.563157 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-6fd6fdc9d8-j4mb2" podStartSLOduration=3.413863042 podStartE2EDuration="5.56314136s" 
podCreationTimestamp="2026-02-23 13:16:47 +0000 UTC" firstStartedPulling="2026-02-23 13:16:49.63277699 +0000 UTC m=+131.479284667" lastFinishedPulling="2026-02-23 13:16:51.782055308 +0000 UTC m=+133.628562985" observedRunningTime="2026-02-23 13:16:52.519817312 +0000 UTC m=+134.366325039" watchObservedRunningTime="2026-02-23 13:16:52.56314136 +0000 UTC m=+134.409649057" Feb 23 13:16:53.384057 master-0 kubenswrapper[26474]: I0223 13:16:53.383923 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-dd5fdb7d7-wf5bd"] Feb 23 13:16:53.442543 master-0 kubenswrapper[26474]: I0223 13:16:53.442468 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-945f67446-9jmmn"] Feb 23 13:16:53.444177 master-0 kubenswrapper[26474]: I0223 13:16:53.443889 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-945f67446-9jmmn" Feb 23 13:16:53.456600 master-0 kubenswrapper[26474]: I0223 13:16:53.456310 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-945f67446-9jmmn"] Feb 23 13:16:53.510371 master-0 kubenswrapper[26474]: I0223 13:16:53.508316 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-console-config\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn" Feb 23 13:16:53.510371 master-0 kubenswrapper[26474]: I0223 13:16:53.508400 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-oauth-serving-cert\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn" Feb 23 13:16:53.510371 master-0 
kubenswrapper[26474]: I0223 13:16:53.508483 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfqn7\" (UniqueName: \"kubernetes.io/projected/ff605101-24a4-4034-b2c8-f8ca959464d5-kube-api-access-jfqn7\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn" Feb 23 13:16:53.510371 master-0 kubenswrapper[26474]: I0223 13:16:53.508537 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-oauth-config\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn" Feb 23 13:16:53.510371 master-0 kubenswrapper[26474]: I0223 13:16:53.508571 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-trusted-ca-bundle\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn" Feb 23 13:16:53.510371 master-0 kubenswrapper[26474]: I0223 13:16:53.508605 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-serving-cert\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn" Feb 23 13:16:53.510371 master-0 kubenswrapper[26474]: I0223 13:16:53.508648 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-service-ca\") pod \"console-945f67446-9jmmn\" 
(UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.612396 master-0 kubenswrapper[26474]: I0223 13:16:53.612292 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-oauth-config\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.612632 master-0 kubenswrapper[26474]: I0223 13:16:53.612445 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-trusted-ca-bundle\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.612632 master-0 kubenswrapper[26474]: I0223 13:16:53.612505 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-serving-cert\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.612632 master-0 kubenswrapper[26474]: I0223 13:16:53.612600 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-service-ca\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.612730 master-0 kubenswrapper[26474]: I0223 13:16:53.612704 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-console-config\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.612761 master-0 kubenswrapper[26474]: I0223 13:16:53.612735 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-oauth-serving-cert\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.612863 master-0 kubenswrapper[26474]: I0223 13:16:53.612826 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfqn7\" (UniqueName: \"kubernetes.io/projected/ff605101-24a4-4034-b2c8-f8ca959464d5-kube-api-access-jfqn7\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.614039 master-0 kubenswrapper[26474]: I0223 13:16:53.614007 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-trusted-ca-bundle\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.614207 master-0 kubenswrapper[26474]: I0223 13:16:53.614173 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-service-ca\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.614822 master-0 kubenswrapper[26474]: I0223 13:16:53.614791 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-console-config\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.615453 master-0 kubenswrapper[26474]: I0223 13:16:53.615398 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-oauth-serving-cert\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.618151 master-0 kubenswrapper[26474]: I0223 13:16:53.618108 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-serving-cert\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.618806 master-0 kubenswrapper[26474]: I0223 13:16:53.618756 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-oauth-config\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.634689 master-0 kubenswrapper[26474]: I0223 13:16:53.634574 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfqn7\" (UniqueName: \"kubernetes.io/projected/ff605101-24a4-4034-b2c8-f8ca959464d5-kube-api-access-jfqn7\") pod \"console-945f67446-9jmmn\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") " pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:53.768695 master-0 kubenswrapper[26474]: I0223 13:16:53.768568 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:16:54.337199 master-0 kubenswrapper[26474]: I0223 13:16:54.337110 26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Feb 23 13:16:54.337316 master-0 kubenswrapper[26474]: I0223 13:16:54.337245 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Feb 23 13:16:54.505429 master-0 kubenswrapper[26474]: I0223 13:16:54.505131 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-945f67446-9jmmn"]
Feb 23 13:16:54.517629 master-0 kubenswrapper[26474]: I0223 13:16:54.517539 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f","Type":"ContainerStarted","Data":"47d403711c579eb5517614da91997f5d204599a574accf9ad74567e0f7e18b56"}
Feb 23 13:16:54.518046 master-0 kubenswrapper[26474]: I0223 13:16:54.517632 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f","Type":"ContainerStarted","Data":"296e41dde1167dc2e47fda6beb5cc46b8ec9740318cf6968239bff861f97329e"}
Feb 23 13:16:54.544846 master-0 kubenswrapper[26474]: I0223 13:16:54.544773 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-8595d4f886-qqtst"
Feb 23 13:16:55.531590 master-0 kubenswrapper[26474]: I0223 13:16:55.531497 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f","Type":"ContainerStarted","Data":"0bac4ca35a93d50f6a5d7bf51c147b873628b75bed639be2a5f3ce759a232c7a"}
Feb 23 13:16:55.531590 master-0 kubenswrapper[26474]: I0223 13:16:55.531560 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f","Type":"ContainerStarted","Data":"045c194c22aa38c73ab9e5574a8ce11aa8b9c598d35c074a67964d221cf68ac9"}
Feb 23 13:16:55.531590 master-0 kubenswrapper[26474]: I0223 13:16:55.531572 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f","Type":"ContainerStarted","Data":"bc4e505fdc7d99bc1148a849f29bf33fa2b8af0bf6a8defdbb54f689cc0275ff"}
Feb 23 13:16:55.531590 master-0 kubenswrapper[26474]: I0223 13:16:55.531584 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fbca6439-68f9-4cac-b0e6-1f66ff0aa11f","Type":"ContainerStarted","Data":"24dbb29bfaea5aea1dda4b838cdc44af1c90b6191277b353e16c55954ce2ae5e"}
Feb 23 13:16:55.534083 master-0 kubenswrapper[26474]: I0223 13:16:55.534013 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-945f67446-9jmmn" event={"ID":"ff605101-24a4-4034-b2c8-f8ca959464d5","Type":"ContainerStarted","Data":"08f4a2679b65ec141096dd259c3329584732b214055d07a7216d70d823114f91"}
Feb 23 13:16:55.534154 master-0 kubenswrapper[26474]: I0223 13:16:55.534109 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-945f67446-9jmmn" event={"ID":"ff605101-24a4-4034-b2c8-f8ca959464d5","Type":"ContainerStarted","Data":"69e275c52b2c887290d26a4fcbcd25a868dbe2b6e43e4f542aede766f7316fbf"}
Feb 23 13:16:55.574226 master-0 kubenswrapper[26474]: I0223 13:16:55.574107 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.8718813020000002 podStartE2EDuration="7.574088076s" podCreationTimestamp="2026-02-23 13:16:48 +0000 UTC" firstStartedPulling="2026-02-23 13:16:50.438102634 +0000 UTC m=+132.284610321" lastFinishedPulling="2026-02-23 13:16:54.140309418 +0000 UTC m=+135.986817095" observedRunningTime="2026-02-23 13:16:55.569991637 +0000 UTC m=+137.416499334" watchObservedRunningTime="2026-02-23 13:16:55.574088076 +0000 UTC m=+137.420595753"
Feb 23 13:16:55.612438 master-0 kubenswrapper[26474]: I0223 13:16:55.607885 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-945f67446-9jmmn" podStartSLOduration=2.607864712 podStartE2EDuration="2.607864712s" podCreationTimestamp="2026-02-23 13:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:16:55.605300609 +0000 UTC m=+137.451808296" watchObservedRunningTime="2026-02-23 13:16:55.607864712 +0000 UTC m=+137.454372389"
Feb 23 13:16:57.574768 master-0 kubenswrapper[26474]: I0223 13:16:57.574676 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-v7pp6"]
Feb 23 13:16:57.580129 master-0 kubenswrapper[26474]: I0223 13:16:57.576524 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.584755 master-0 kubenswrapper[26474]: I0223 13:16:57.584691 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-kbbcm"
Feb 23 13:16:57.593518 master-0 kubenswrapper[26474]: I0223 13:16:57.593456 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 23 13:16:57.694622 master-0 kubenswrapper[26474]: I0223 13:16:57.694202 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9xfl\" (UniqueName: \"kubernetes.io/projected/2fcfa52b-56eb-4399-88b8-5810794ad070-kube-api-access-s9xfl\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.695441 master-0 kubenswrapper[26474]: I0223 13:16:57.695281 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2fcfa52b-56eb-4399-88b8-5810794ad070-serviceca\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.695873 master-0 kubenswrapper[26474]: I0223 13:16:57.695841 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2fcfa52b-56eb-4399-88b8-5810794ad070-host\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.798938 master-0 kubenswrapper[26474]: I0223 13:16:57.798839 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2fcfa52b-56eb-4399-88b8-5810794ad070-host\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.799190 master-0 kubenswrapper[26474]: I0223 13:16:57.798999 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9xfl\" (UniqueName: \"kubernetes.io/projected/2fcfa52b-56eb-4399-88b8-5810794ad070-kube-api-access-s9xfl\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.799190 master-0 kubenswrapper[26474]: I0223 13:16:57.799059 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2fcfa52b-56eb-4399-88b8-5810794ad070-serviceca\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.799190 master-0 kubenswrapper[26474]: I0223 13:16:57.799044 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2fcfa52b-56eb-4399-88b8-5810794ad070-host\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.800170 master-0 kubenswrapper[26474]: I0223 13:16:57.800109 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2fcfa52b-56eb-4399-88b8-5810794ad070-serviceca\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.820913 master-0 kubenswrapper[26474]: I0223 13:16:57.820831 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9xfl\" (UniqueName: \"kubernetes.io/projected/2fcfa52b-56eb-4399-88b8-5810794ad070-kube-api-access-s9xfl\") pod \"node-ca-v7pp6\" (UID: \"2fcfa52b-56eb-4399-88b8-5810794ad070\") " pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:57.934681 master-0 kubenswrapper[26474]: I0223 13:16:57.934494 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-v7pp6"
Feb 23 13:16:58.575982 master-0 kubenswrapper[26474]: I0223 13:16:58.575832 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v7pp6" event={"ID":"2fcfa52b-56eb-4399-88b8-5810794ad070","Type":"ContainerStarted","Data":"6b1a7cf98f3c697c907d5f5c1f5ddf6fdf2d3c7c6e7fb063261fa3c4a6253222"}
Feb 23 13:16:59.126183 master-0 kubenswrapper[26474]: I0223 13:16:59.126094 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:17:00.606909 master-0 kubenswrapper[26474]: I0223 13:17:00.606810 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v7pp6" event={"ID":"2fcfa52b-56eb-4399-88b8-5810794ad070","Type":"ContainerStarted","Data":"123753960f8548a761f7c78306f03e66dfcbcbff0598559dbb9e5c92dea3fdc5"}
Feb 23 13:17:00.632574 master-0 kubenswrapper[26474]: I0223 13:17:00.632422 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-v7pp6" podStartSLOduration=1.795449861 podStartE2EDuration="3.632401543s" podCreationTimestamp="2026-02-23 13:16:57 +0000 UTC" firstStartedPulling="2026-02-23 13:16:57.970912418 +0000 UTC m=+139.817420135" lastFinishedPulling="2026-02-23 13:16:59.80786414 +0000 UTC m=+141.654371817" observedRunningTime="2026-02-23 13:17:00.628214861 +0000 UTC m=+142.474722588" watchObservedRunningTime="2026-02-23 13:17:00.632401543 +0000 UTC m=+142.478909230"
Feb 23 13:17:03.769614 master-0 kubenswrapper[26474]: I0223 13:17:03.769492 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:17:03.769614 master-0 kubenswrapper[26474]: I0223 13:17:03.769589 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:17:03.773490 master-0 kubenswrapper[26474]: I0223 13:17:03.773414 26474 patch_prober.go:28] interesting pod/console-945f67446-9jmmn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Feb 23 13:17:03.773658 master-0 kubenswrapper[26474]: I0223 13:17:03.773500 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-945f67446-9jmmn" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Feb 23 13:17:04.391224 master-0 kubenswrapper[26474]: I0223 13:17:04.336885 26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Feb 23 13:17:04.391224 master-0 kubenswrapper[26474]: I0223 13:17:04.336975 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Feb 23 13:17:07.575028 master-0 kubenswrapper[26474]: I0223 13:17:07.574833 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:17:07.575028 master-0 kubenswrapper[26474]: I0223 13:17:07.574919 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-79f8868b4-qms96"
Feb 23 13:17:11.103883 master-0 kubenswrapper[26474]: I0223 13:17:11.103804 26474 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:17:11.104883 master-0 kubenswrapper[26474]: I0223 13:17:11.104845 26474 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:17:11.105162 master-0 kubenswrapper[26474]: I0223 13:17:11.105087 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.105248 master-0 kubenswrapper[26474]: I0223 13:17:11.105205 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" containerID="cri-o://cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c" gracePeriod=15
Feb 23 13:17:11.105310 master-0 kubenswrapper[26474]: I0223 13:17:11.105229 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d" gracePeriod=15
Feb 23 13:17:11.105401 master-0 kubenswrapper[26474]: I0223 13:17:11.105318 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337" gracePeriod=15
Feb 23 13:17:11.105444 master-0 kubenswrapper[26474]: I0223 13:17:11.105392 26474 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:17:11.105488 master-0 kubenswrapper[26474]: I0223 13:17:11.105367 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57" gracePeriod=15
Feb 23 13:17:11.105520 master-0 kubenswrapper[26474]: I0223 13:17:11.105415 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94" gracePeriod=15
Feb 23 13:17:11.105775 master-0 kubenswrapper[26474]: E0223 13:17:11.105740 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:17:11.105775 master-0 kubenswrapper[26474]: I0223 13:17:11.105766 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: E0223 13:17:11.105791 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.105799 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: E0223 13:17:11.105835 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.105844 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: E0223 13:17:11.105862 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.105871 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: E0223 13:17:11.105885 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.105892 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: E0223 13:17:11.105905 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.105913 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: E0223 13:17:11.105926 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.105933 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.106078 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.106108 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.106127 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer"
Feb 23 13:17:11.106131 master-0 kubenswrapper[26474]: I0223 13:17:11.106142 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:17:11.106709 master-0 kubenswrapper[26474]: I0223 13:17:11.106168 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:17:11.106709 master-0 kubenswrapper[26474]: I0223 13:17:11.106220 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver"
Feb 23 13:17:11.113594 master-0 kubenswrapper[26474]: I0223 13:17:11.113545 26474 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="eb342c942d3d92fd08ed7cf68fafb94c" podUID="487622064474ed0ec70f7bf2a0fcb80b"
Feb 23 13:17:11.152993 master-0 kubenswrapper[26474]: I0223 13:17:11.152555 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:17:11.166246 master-0 kubenswrapper[26474]: I0223 13:17:11.166202 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.166362 master-0 kubenswrapper[26474]: I0223 13:17:11.166271 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.166362 master-0 kubenswrapper[26474]: I0223 13:17:11.166300 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.166362 master-0 kubenswrapper[26474]: I0223 13:17:11.166327 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.166472 master-0 kubenswrapper[26474]: I0223 13:17:11.166371 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.166502 master-0 kubenswrapper[26474]: I0223 13:17:11.166397 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.166603 master-0 kubenswrapper[26474]: I0223 13:17:11.166550 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.166660 master-0 kubenswrapper[26474]: I0223 13:17:11.166639 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.268487 master-0 kubenswrapper[26474]: I0223 13:17:11.268377 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.268487 master-0 kubenswrapper[26474]: I0223 13:17:11.268435 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.268710 master-0 kubenswrapper[26474]: I0223 13:17:11.268571 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.268744 master-0 kubenswrapper[26474]: I0223 13:17:11.268709 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.269003 master-0 kubenswrapper[26474]: I0223 13:17:11.268960 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.269115 master-0 kubenswrapper[26474]: I0223 13:17:11.269082 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.269189 master-0 kubenswrapper[26474]: I0223 13:17:11.269157 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.269274 master-0 kubenswrapper[26474]: I0223 13:17:11.269242 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.269309 master-0 kubenswrapper[26474]: I0223 13:17:11.269289 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.269497 master-0 kubenswrapper[26474]: I0223 13:17:11.269458 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.269557 master-0 kubenswrapper[26474]: I0223 13:17:11.269517 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.269614 master-0 kubenswrapper[26474]: I0223 13:17:11.269561 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.269872 master-0 kubenswrapper[26474]: I0223 13:17:11.269678 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.269872 master-0 kubenswrapper[26474]: I0223 13:17:11.269805 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.269872 master-0 kubenswrapper[26474]: I0223 13:17:11.269807 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.269872 master-0 kubenswrapper[26474]: I0223 13:17:11.269835 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:11.450288 master-0 kubenswrapper[26474]: I0223 13:17:11.450143 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:17:11.479809 master-0 kubenswrapper[26474]: W0223 13:17:11.479764 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2146f0e3671998cad8bbc2464b009ab7.slice/crio-9eeea75212318acb9f78a9367004e82a82ddcb29fb84fce45fd4a0d01bfba908 WatchSource:0}: Error finding container 9eeea75212318acb9f78a9367004e82a82ddcb29fb84fce45fd4a0d01bfba908: Status 404 returned error can't find the container with id 9eeea75212318acb9f78a9367004e82a82ddcb29fb84fce45fd4a0d01bfba908
Feb 23 13:17:11.483046 master-0 kubenswrapper[26474]: E0223 13:17:11.482894 26474 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e294a5d5490c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:2146f0e3671998cad8bbc2464b009ab7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:17:11.482124556 +0000 UTC m=+153.328632243,LastTimestamp:2026-02-23 13:17:11.482124556 +0000 UTC m=+153.328632243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:17:11.717401 master-0 kubenswrapper[26474]: I0223 13:17:11.717122 26474 kubelet.go:2453] "SyncLoop (PLEG):
event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"2146f0e3671998cad8bbc2464b009ab7","Type":"ContainerStarted","Data":"9eeea75212318acb9f78a9367004e82a82ddcb29fb84fce45fd4a0d01bfba908"} Feb 23 13:17:11.719198 master-0 kubenswrapper[26474]: I0223 13:17:11.719166 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/1.log" Feb 23 13:17:11.720416 master-0 kubenswrapper[26474]: I0223 13:17:11.720368 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 23 13:17:11.721040 master-0 kubenswrapper[26474]: I0223 13:17:11.721012 26474 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d" exitCode=0 Feb 23 13:17:11.721040 master-0 kubenswrapper[26474]: I0223 13:17:11.721036 26474 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57" exitCode=0 Feb 23 13:17:11.721153 master-0 kubenswrapper[26474]: I0223 13:17:11.721045 26474 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337" exitCode=0 Feb 23 13:17:11.721153 master-0 kubenswrapper[26474]: I0223 13:17:11.721093 26474 scope.go:117] "RemoveContainer" containerID="f29c0801cb73a88db37a5dde38238b8a02b3aa465a16ef32b1a402a776062703" Feb 23 13:17:11.721219 master-0 kubenswrapper[26474]: I0223 13:17:11.721053 26474 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94" 
exitCode=2 Feb 23 13:17:11.723794 master-0 kubenswrapper[26474]: I0223 13:17:11.723758 26474 generic.go:334] "Generic (PLEG): container finished" podID="75f01779-caef-46f3-ac91-89f32798535b" containerID="6ae764c36f75fb31278af573e5a47b73bf398ccd1fd74fb456dba322065bc861" exitCode=0 Feb 23 13:17:11.723854 master-0 kubenswrapper[26474]: I0223 13:17:11.723782 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"75f01779-caef-46f3-ac91-89f32798535b","Type":"ContainerDied","Data":"6ae764c36f75fb31278af573e5a47b73bf398ccd1fd74fb456dba322065bc861"} Feb 23 13:17:11.725097 master-0 kubenswrapper[26474]: I0223 13:17:11.725044 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:11.725860 master-0 kubenswrapper[26474]: I0223 13:17:11.725819 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:12.235776 master-0 kubenswrapper[26474]: E0223 13:17:12.235588 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:17:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:17:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:17:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:17:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572144cdb97c8854332f3a8dfcf420a30632211462da13c6d060599b2eef8085\\\"],\\\"sizeBytes\\\":2895784037},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c2af15d278f72034eecf3db74223b7e61f3d07c1a5c7ba760e7586915ff1b17e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e166d1252d7455d8bd62e43f2967e738ee9bdd6a09b7771a4187d82477ae7535\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1237042376},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5385b46d054c9ee73478bf23e07056d0b9f81d34619d0949927d8d9e791fcb5\\\",\\
\"registry.redhat.io/redhat/community-operator-index@sha256:ebdc10b149ba97b999770285d06149ef92c780205d916c3cab994098e20be0ba\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210455233},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657\\\"],\\\"sizeBytes\\\":633766177},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92\\\"],\\\"sizeBytes\\\":605597321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b4
70626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b\\\"],\\\"sizeBytes\\\":512134379},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1\\\"],\\\"sizeBytes\\\":502604403},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\
\"],\\\"sizeBytes\\\":487054953}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:12.236489 master-0 kubenswrapper[26474]: E0223 13:17:12.236452 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:12.237393 master-0 kubenswrapper[26474]: E0223 13:17:12.237367 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:12.237978 master-0 kubenswrapper[26474]: E0223 13:17:12.237951 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:12.238585 master-0 kubenswrapper[26474]: E0223 13:17:12.238553 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:12.238658 master-0 kubenswrapper[26474]: E0223 13:17:12.238586 26474 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:17:12.701672 master-0 kubenswrapper[26474]: E0223 13:17:12.701483 26474 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: 
connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e294a5d5490c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:2146f0e3671998cad8bbc2464b009ab7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:17:11.482124556 +0000 UTC m=+153.328632243,LastTimestamp:2026-02-23 13:17:11.482124556 +0000 UTC m=+153.328632243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:17:12.734834 master-0 kubenswrapper[26474]: I0223 13:17:12.734680 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"2146f0e3671998cad8bbc2464b009ab7","Type":"ContainerStarted","Data":"9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7"} Feb 23 13:17:12.737718 master-0 kubenswrapper[26474]: I0223 13:17:12.736727 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:12.739307 master-0 kubenswrapper[26474]: I0223 13:17:12.739228 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:12.739657 master-0 kubenswrapper[26474]: I0223 13:17:12.739593 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 23 13:17:13.250690 master-0 kubenswrapper[26474]: I0223 13:17:13.250638 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 23 13:17:13.262662 master-0 kubenswrapper[26474]: I0223 13:17:13.262430 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.265183 master-0 kubenswrapper[26474]: I0223 13:17:13.265126 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.325515 master-0 kubenswrapper[26474]: I0223 13:17:13.325464 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f01779-caef-46f3-ac91-89f32798535b-kube-api-access\") pod \"75f01779-caef-46f3-ac91-89f32798535b\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " Feb 23 13:17:13.325515 master-0 kubenswrapper[26474]: I0223 13:17:13.325548 26474 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-kubelet-dir\") pod \"75f01779-caef-46f3-ac91-89f32798535b\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " Feb 23 13:17:13.325808 master-0 kubenswrapper[26474]: I0223 13:17:13.325619 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-var-lock\") pod \"75f01779-caef-46f3-ac91-89f32798535b\" (UID: \"75f01779-caef-46f3-ac91-89f32798535b\") " Feb 23 13:17:13.326241 master-0 kubenswrapper[26474]: I0223 13:17:13.325891 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "75f01779-caef-46f3-ac91-89f32798535b" (UID: "75f01779-caef-46f3-ac91-89f32798535b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:13.326241 master-0 kubenswrapper[26474]: I0223 13:17:13.325970 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-var-lock" (OuterVolumeSpecName: "var-lock") pod "75f01779-caef-46f3-ac91-89f32798535b" (UID: "75f01779-caef-46f3-ac91-89f32798535b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:13.330413 master-0 kubenswrapper[26474]: I0223 13:17:13.330280 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f01779-caef-46f3-ac91-89f32798535b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "75f01779-caef-46f3-ac91-89f32798535b" (UID: "75f01779-caef-46f3-ac91-89f32798535b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:17:13.427052 master-0 kubenswrapper[26474]: I0223 13:17:13.426919 26474 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:13.427052 master-0 kubenswrapper[26474]: I0223 13:17:13.426962 26474 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f01779-caef-46f3-ac91-89f32798535b-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:13.427052 master-0 kubenswrapper[26474]: I0223 13:17:13.426974 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f01779-caef-46f3-ac91-89f32798535b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:13.499491 master-0 kubenswrapper[26474]: I0223 13:17:13.499456 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 23 13:17:13.500167 master-0 kubenswrapper[26474]: I0223 13:17:13.500131 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:13.501083 master-0 kubenswrapper[26474]: I0223 13:17:13.501034 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.501466 master-0 kubenswrapper[26474]: I0223 13:17:13.501431 26474 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.501840 master-0 kubenswrapper[26474]: I0223 13:17:13.501796 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.629438 master-0 kubenswrapper[26474]: I0223 13:17:13.629319 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 23 13:17:13.629904 master-0 kubenswrapper[26474]: I0223 13:17:13.629431 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod 
"eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:13.630149 master-0 kubenswrapper[26474]: I0223 13:17:13.630109 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 23 13:17:13.630276 master-0 kubenswrapper[26474]: I0223 13:17:13.630173 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 23 13:17:13.630942 master-0 kubenswrapper[26474]: I0223 13:17:13.630523 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:13.630942 master-0 kubenswrapper[26474]: I0223 13:17:13.630570 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:13.630942 master-0 kubenswrapper[26474]: I0223 13:17:13.630836 26474 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:13.630942 master-0 kubenswrapper[26474]: I0223 13:17:13.630865 26474 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:13.630942 master-0 kubenswrapper[26474]: I0223 13:17:13.630887 26474 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:13.754039 master-0 kubenswrapper[26474]: I0223 13:17:13.753864 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"75f01779-caef-46f3-ac91-89f32798535b","Type":"ContainerDied","Data":"bc4f80b11192ee4242694bfb33b8a867850c1044ab52570db83f2eb80271eee6"} Feb 23 13:17:13.754039 master-0 kubenswrapper[26474]: I0223 13:17:13.753962 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4f80b11192ee4242694bfb33b8a867850c1044ab52570db83f2eb80271eee6" Feb 23 13:17:13.754039 master-0 kubenswrapper[26474]: I0223 13:17:13.753880 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 23 13:17:13.759540 master-0 kubenswrapper[26474]: I0223 13:17:13.759488 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 23 13:17:13.760796 master-0 kubenswrapper[26474]: I0223 13:17:13.760745 26474 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c" exitCode=0 Feb 23 13:17:13.761063 master-0 kubenswrapper[26474]: I0223 13:17:13.760890 26474 scope.go:117] "RemoveContainer" containerID="98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d" Feb 23 13:17:13.761195 master-0 kubenswrapper[26474]: I0223 13:17:13.760925 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:13.770472 master-0 kubenswrapper[26474]: I0223 13:17:13.770424 26474 patch_prober.go:28] interesting pod/console-945f67446-9jmmn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 23 13:17:13.770472 master-0 kubenswrapper[26474]: I0223 13:17:13.770461 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-945f67446-9jmmn" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 23 13:17:13.790743 master-0 kubenswrapper[26474]: I0223 13:17:13.790335 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.792179 master-0 kubenswrapper[26474]: I0223 13:17:13.791972 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.793308 master-0 kubenswrapper[26474]: I0223 13:17:13.793235 26474 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.794293 master-0 kubenswrapper[26474]: I0223 13:17:13.794225 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.795117 master-0 kubenswrapper[26474]: I0223 13:17:13.795048 26474 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.795899 master-0 kubenswrapper[26474]: I0223 13:17:13.795846 26474 
status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:13.797460 master-0 kubenswrapper[26474]: I0223 13:17:13.797422 26474 scope.go:117] "RemoveContainer" containerID="eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57" Feb 23 13:17:13.821290 master-0 kubenswrapper[26474]: I0223 13:17:13.821247 26474 scope.go:117] "RemoveContainer" containerID="f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337" Feb 23 13:17:13.849214 master-0 kubenswrapper[26474]: I0223 13:17:13.849159 26474 scope.go:117] "RemoveContainer" containerID="c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94" Feb 23 13:17:13.873853 master-0 kubenswrapper[26474]: I0223 13:17:13.873817 26474 scope.go:117] "RemoveContainer" containerID="cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c" Feb 23 13:17:13.903002 master-0 kubenswrapper[26474]: I0223 13:17:13.902655 26474 scope.go:117] "RemoveContainer" containerID="2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757" Feb 23 13:17:13.945165 master-0 kubenswrapper[26474]: I0223 13:17:13.945115 26474 scope.go:117] "RemoveContainer" containerID="98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d" Feb 23 13:17:13.945927 master-0 kubenswrapper[26474]: E0223 13:17:13.945871 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d\": container with ID starting with 98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d not found: ID does not exist" containerID="98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d" Feb 23 13:17:13.946024 
master-0 kubenswrapper[26474]: I0223 13:17:13.945922 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d"} err="failed to get container status \"98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d\": rpc error: code = NotFound desc = could not find container \"98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d\": container with ID starting with 98cdf54dec1d913d79964d549df442e1459c235fce40288ac3844b3fcb6dcb5d not found: ID does not exist" Feb 23 13:17:13.946024 master-0 kubenswrapper[26474]: I0223 13:17:13.945953 26474 scope.go:117] "RemoveContainer" containerID="eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57" Feb 23 13:17:13.946496 master-0 kubenswrapper[26474]: E0223 13:17:13.946450 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57\": container with ID starting with eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57 not found: ID does not exist" containerID="eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57" Feb 23 13:17:13.946681 master-0 kubenswrapper[26474]: I0223 13:17:13.946631 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57"} err="failed to get container status \"eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57\": rpc error: code = NotFound desc = could not find container \"eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57\": container with ID starting with eaba304d54459043c2fb3843d04f68700d01cdea418e80052c39c8b0b5599d57 not found: ID does not exist" Feb 23 13:17:13.946813 master-0 kubenswrapper[26474]: I0223 13:17:13.946789 26474 scope.go:117] "RemoveContainer" 
containerID="f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337" Feb 23 13:17:13.947418 master-0 kubenswrapper[26474]: E0223 13:17:13.947387 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337\": container with ID starting with f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337 not found: ID does not exist" containerID="f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337" Feb 23 13:17:13.947530 master-0 kubenswrapper[26474]: I0223 13:17:13.947418 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337"} err="failed to get container status \"f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337\": rpc error: code = NotFound desc = could not find container \"f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337\": container with ID starting with f573c2e5230d65d8dc8ba6b54ac190a99cddc261a6d1f0f017402ee2aafcc337 not found: ID does not exist" Feb 23 13:17:13.947530 master-0 kubenswrapper[26474]: I0223 13:17:13.947436 26474 scope.go:117] "RemoveContainer" containerID="c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94" Feb 23 13:17:13.947965 master-0 kubenswrapper[26474]: E0223 13:17:13.947920 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94\": container with ID starting with c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94 not found: ID does not exist" containerID="c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94" Feb 23 13:17:13.947965 master-0 kubenswrapper[26474]: I0223 13:17:13.947954 26474 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94"} err="failed to get container status \"c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94\": rpc error: code = NotFound desc = could not find container \"c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94\": container with ID starting with c64f6fcd6d413ff04b2bb6c8cd0b3923f91edcc957ca4abb4c84ba9a9c4dff94 not found: ID does not exist" Feb 23 13:17:13.948150 master-0 kubenswrapper[26474]: I0223 13:17:13.947979 26474 scope.go:117] "RemoveContainer" containerID="cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c" Feb 23 13:17:13.948516 master-0 kubenswrapper[26474]: E0223 13:17:13.948481 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c\": container with ID starting with cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c not found: ID does not exist" containerID="cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c" Feb 23 13:17:13.948696 master-0 kubenswrapper[26474]: I0223 13:17:13.948645 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c"} err="failed to get container status \"cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c\": rpc error: code = NotFound desc = could not find container \"cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c\": container with ID starting with cc7b5d68de9f03edf827e5955150752524817c3a80a882a5566b62cc23dd806c not found: ID does not exist" Feb 23 13:17:13.948815 master-0 kubenswrapper[26474]: I0223 13:17:13.948793 26474 scope.go:117] "RemoveContainer" containerID="2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757" Feb 23 13:17:13.949386 master-0 kubenswrapper[26474]: E0223 
13:17:13.949354 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757\": container with ID starting with 2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757 not found: ID does not exist" containerID="2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757" Feb 23 13:17:13.949386 master-0 kubenswrapper[26474]: I0223 13:17:13.949384 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757"} err="failed to get container status \"2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757\": rpc error: code = NotFound desc = could not find container \"2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757\": container with ID starting with 2c466f55700cac0e542549de901af7b12c93bf2524e86c59e0b19bb832217757 not found: ID does not exist" Feb 23 13:17:14.335860 master-0 kubenswrapper[26474]: I0223 13:17:14.335806 26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Feb 23 13:17:14.336319 master-0 kubenswrapper[26474]: I0223 13:17:14.335863 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Feb 23 13:17:14.407388 master-0 kubenswrapper[26474]: I0223 13:17:14.407295 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb342c942d3d92fd08ed7cf68fafb94c" path="/var/lib/kubelet/pods/eb342c942d3d92fd08ed7cf68fafb94c/volumes" 
Feb 23 13:17:16.283143 master-0 kubenswrapper[26474]: E0223 13:17:16.283014 26474 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:16.284159 master-0 kubenswrapper[26474]: E0223 13:17:16.283907 26474 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:16.284809 master-0 kubenswrapper[26474]: E0223 13:17:16.284743 26474 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:16.285728 master-0 kubenswrapper[26474]: E0223 13:17:16.285634 26474 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:16.286947 master-0 kubenswrapper[26474]: E0223 13:17:16.286858 26474 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:16.286947 master-0 kubenswrapper[26474]: I0223 13:17:16.286942 26474 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 23 13:17:16.288272 master-0 kubenswrapper[26474]: E0223 13:17:16.288085 26474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 23 13:17:16.489901 master-0 kubenswrapper[26474]: E0223 13:17:16.489792 26474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 23 13:17:16.891754 master-0 kubenswrapper[26474]: E0223 13:17:16.891645 26474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 23 13:17:17.693190 master-0 kubenswrapper[26474]: E0223 13:17:17.693040 26474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 23 13:17:18.401944 master-0 kubenswrapper[26474]: I0223 13:17:18.401834 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:18.403001 master-0 kubenswrapper[26474]: I0223 13:17:18.402899 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:18.460682 master-0 kubenswrapper[26474]: I0223 13:17:18.460605 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-dd5fdb7d7-wf5bd" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console" containerID="cri-o://8f7858f429fae1e5b86aa6190b59689851482fe73d0e2b5dcffd6f308650acaa" gracePeriod=15 Feb 23 13:17:18.824619 master-0 kubenswrapper[26474]: I0223 13:17:18.824505 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-dd5fdb7d7-wf5bd_3a0b32d2-df4f-44e9-a841-b7e925783400/console/0.log" Feb 23 13:17:18.825420 master-0 kubenswrapper[26474]: I0223 13:17:18.824619 26474 generic.go:334] "Generic (PLEG): container finished" podID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerID="8f7858f429fae1e5b86aa6190b59689851482fe73d0e2b5dcffd6f308650acaa" exitCode=2 Feb 23 13:17:18.825420 master-0 kubenswrapper[26474]: I0223 13:17:18.824671 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dd5fdb7d7-wf5bd" event={"ID":"3a0b32d2-df4f-44e9-a841-b7e925783400","Type":"ContainerDied","Data":"8f7858f429fae1e5b86aa6190b59689851482fe73d0e2b5dcffd6f308650acaa"} Feb 23 13:17:19.041591 master-0 kubenswrapper[26474]: I0223 13:17:19.041524 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-dd5fdb7d7-wf5bd_3a0b32d2-df4f-44e9-a841-b7e925783400/console/0.log" Feb 23 13:17:19.041819 master-0 kubenswrapper[26474]: I0223 13:17:19.041615 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:17:19.043207 master-0 kubenswrapper[26474]: I0223 13:17:19.043125 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:19.044228 master-0 kubenswrapper[26474]: I0223 13:17:19.044136 26474 status_manager.go:851] "Failed to get status for pod" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" pod="openshift-console/console-dd5fdb7d7-wf5bd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-dd5fdb7d7-wf5bd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:19.045098 master-0 kubenswrapper[26474]: I0223 13:17:19.045025 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:19.155835 master-0 kubenswrapper[26474]: I0223 13:17:19.155762 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-service-ca\") pod \"3a0b32d2-df4f-44e9-a841-b7e925783400\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " Feb 23 13:17:19.156015 master-0 kubenswrapper[26474]: I0223 13:17:19.155883 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9zw9\" (UniqueName: \"kubernetes.io/projected/3a0b32d2-df4f-44e9-a841-b7e925783400-kube-api-access-d9zw9\") pod 
\"3a0b32d2-df4f-44e9-a841-b7e925783400\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " Feb 23 13:17:19.156015 master-0 kubenswrapper[26474]: I0223 13:17:19.155970 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-console-config\") pod \"3a0b32d2-df4f-44e9-a841-b7e925783400\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " Feb 23 13:17:19.156084 master-0 kubenswrapper[26474]: I0223 13:17:19.156065 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-oauth-serving-cert\") pod \"3a0b32d2-df4f-44e9-a841-b7e925783400\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " Feb 23 13:17:19.156131 master-0 kubenswrapper[26474]: I0223 13:17:19.156104 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-trusted-ca-bundle\") pod \"3a0b32d2-df4f-44e9-a841-b7e925783400\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " Feb 23 13:17:19.156218 master-0 kubenswrapper[26474]: I0223 13:17:19.156185 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-oauth-config\") pod \"3a0b32d2-df4f-44e9-a841-b7e925783400\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " Feb 23 13:17:19.156326 master-0 kubenswrapper[26474]: I0223 13:17:19.156294 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-serving-cert\") pod \"3a0b32d2-df4f-44e9-a841-b7e925783400\" (UID: \"3a0b32d2-df4f-44e9-a841-b7e925783400\") " Feb 23 13:17:19.156771 master-0 
kubenswrapper[26474]: I0223 13:17:19.156721 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3a0b32d2-df4f-44e9-a841-b7e925783400" (UID: "3a0b32d2-df4f-44e9-a841-b7e925783400"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:19.157075 master-0 kubenswrapper[26474]: I0223 13:17:19.157012 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3a0b32d2-df4f-44e9-a841-b7e925783400" (UID: "3a0b32d2-df4f-44e9-a841-b7e925783400"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:19.157153 master-0 kubenswrapper[26474]: I0223 13:17:19.157123 26474 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:19.157207 master-0 kubenswrapper[26474]: I0223 13:17:19.157141 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-console-config" (OuterVolumeSpecName: "console-config") pod "3a0b32d2-df4f-44e9-a841-b7e925783400" (UID: "3a0b32d2-df4f-44e9-a841-b7e925783400"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:19.158523 master-0 kubenswrapper[26474]: I0223 13:17:19.158470 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-service-ca" (OuterVolumeSpecName: "service-ca") pod "3a0b32d2-df4f-44e9-a841-b7e925783400" (UID: "3a0b32d2-df4f-44e9-a841-b7e925783400"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:19.160696 master-0 kubenswrapper[26474]: I0223 13:17:19.160604 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0b32d2-df4f-44e9-a841-b7e925783400-kube-api-access-d9zw9" (OuterVolumeSpecName: "kube-api-access-d9zw9") pod "3a0b32d2-df4f-44e9-a841-b7e925783400" (UID: "3a0b32d2-df4f-44e9-a841-b7e925783400"). InnerVolumeSpecName "kube-api-access-d9zw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:17:19.162233 master-0 kubenswrapper[26474]: I0223 13:17:19.162162 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3a0b32d2-df4f-44e9-a841-b7e925783400" (UID: "3a0b32d2-df4f-44e9-a841-b7e925783400"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:17:19.162384 master-0 kubenswrapper[26474]: I0223 13:17:19.162268 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3a0b32d2-df4f-44e9-a841-b7e925783400" (UID: "3a0b32d2-df4f-44e9-a841-b7e925783400"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:17:19.259758 master-0 kubenswrapper[26474]: I0223 13:17:19.259678 26474 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:19.259758 master-0 kubenswrapper[26474]: I0223 13:17:19.259756 26474 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:19.259905 master-0 kubenswrapper[26474]: I0223 13:17:19.259779 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9zw9\" (UniqueName: \"kubernetes.io/projected/3a0b32d2-df4f-44e9-a841-b7e925783400-kube-api-access-d9zw9\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:19.259905 master-0 kubenswrapper[26474]: I0223 13:17:19.259801 26474 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-console-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:19.259905 master-0 kubenswrapper[26474]: I0223 13:17:19.259820 26474 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a0b32d2-df4f-44e9-a841-b7e925783400-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:19.259905 master-0 kubenswrapper[26474]: I0223 13:17:19.259837 26474 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a0b32d2-df4f-44e9-a841-b7e925783400-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:19.294779 master-0 kubenswrapper[26474]: E0223 13:17:19.294690 26474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 23 13:17:19.835001 master-0 kubenswrapper[26474]: I0223 13:17:19.834927 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-dd5fdb7d7-wf5bd_3a0b32d2-df4f-44e9-a841-b7e925783400/console/0.log" Feb 23 13:17:19.835001 master-0 kubenswrapper[26474]: I0223 13:17:19.835003 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dd5fdb7d7-wf5bd" event={"ID":"3a0b32d2-df4f-44e9-a841-b7e925783400","Type":"ContainerDied","Data":"eab3aaf7f85b40d5ba67f3292063011f53a7bbf03aa5b976d15016d60d02103c"} Feb 23 13:17:19.836118 master-0 kubenswrapper[26474]: I0223 13:17:19.835055 26474 scope.go:117] "RemoveContainer" containerID="8f7858f429fae1e5b86aa6190b59689851482fe73d0e2b5dcffd6f308650acaa" Feb 23 13:17:19.836118 master-0 kubenswrapper[26474]: I0223 13:17:19.835117 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-dd5fdb7d7-wf5bd" Feb 23 13:17:19.836405 master-0 kubenswrapper[26474]: I0223 13:17:19.836274 26474 status_manager.go:851] "Failed to get status for pod" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" pod="openshift-console/console-dd5fdb7d7-wf5bd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-dd5fdb7d7-wf5bd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:19.837451 master-0 kubenswrapper[26474]: I0223 13:17:19.837395 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:19.838175 master-0 kubenswrapper[26474]: I0223 13:17:19.838106 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:19.860274 master-0 kubenswrapper[26474]: I0223 13:17:19.859920 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:19.860758 master-0 kubenswrapper[26474]: I0223 13:17:19.860680 26474 status_manager.go:851] "Failed to get status for pod" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" 
pod="openshift-console/console-dd5fdb7d7-wf5bd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-dd5fdb7d7-wf5bd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:19.861486 master-0 kubenswrapper[26474]: I0223 13:17:19.861419 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:22.296462 master-0 kubenswrapper[26474]: E0223 13:17:22.296263 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:17:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:17:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:17:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:17:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572144cdb97c8854332f3a8dfcf420a30632211462da13c6d060599b2eef8085\\\"],\\\"sizeBytes\\\":2895784037},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c2af15d278f72034eecf3db74223b7e61f3d07c1a5c7ba760e7586915ff1b17e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e166d1252d7455d8bd62e43f2967e738ee9bdd6a09b7771a4187d82477ae7535\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1237042376},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5385b46d054c9ee73478bf23e07056d0b9f81d34619d0949927d8d9e791fcb5\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ebdc10b149ba97b999770285d06149ef92c780205d916c3cab994098e20be0ba\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210455233},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657\\\"],\\\"sizeBytes\\\":633766177},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92\\\"],\\\"sizeBytes\\\":605597321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":
589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b75
97d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b\\\"],\\\"sizeBytes\\\":512134379},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1\\\"],\\\"sizeBytes\\\":502604403},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:22.297101 master-0 kubenswrapper[26474]: E0223 13:17:22.297068 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:22.297736 master-0 kubenswrapper[26474]: E0223 13:17:22.297703 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:22.298653 master-0 kubenswrapper[26474]: E0223 13:17:22.298593 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Feb 23 13:17:22.299294 master-0 kubenswrapper[26474]: E0223 13:17:22.299257 26474 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:22.299369 master-0 kubenswrapper[26474]: E0223 13:17:22.299293 26474 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:17:22.495602 master-0 kubenswrapper[26474]: E0223 13:17:22.495490 26474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 23 13:17:22.703029 master-0 kubenswrapper[26474]: E0223 13:17:22.702752 26474 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e294a5d5490c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:2146f0e3671998cad8bbc2464b009ab7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:17:11.482124556 +0000 UTC m=+153.328632243,LastTimestamp:2026-02-23 13:17:11.482124556 +0000 UTC 
m=+153.328632243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:17:23.769783 master-0 kubenswrapper[26474]: I0223 13:17:23.769695 26474 patch_prober.go:28] interesting pod/console-945f67446-9jmmn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 23 13:17:23.769783 master-0 kubenswrapper[26474]: I0223 13:17:23.769763 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-945f67446-9jmmn" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 23 13:17:24.336081 master-0 kubenswrapper[26474]: I0223 13:17:24.335998 26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Feb 23 13:17:24.336081 master-0 kubenswrapper[26474]: I0223 13:17:24.336070 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Feb 23 13:17:24.880205 master-0 kubenswrapper[26474]: I0223 13:17:24.880120 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager/1.log" Feb 23 13:17:24.882930 master-0 kubenswrapper[26474]: I0223 13:17:24.882874 26474 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager/0.log" Feb 23 13:17:24.883021 master-0 kubenswrapper[26474]: I0223 13:17:24.882947 26474 generic.go:334] "Generic (PLEG): container finished" podID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerID="e375fe5c02f0608ef4aac501c8122f7edac3d21f041acfb53911dc7efc555b71" exitCode=1 Feb 23 13:17:24.883021 master-0 kubenswrapper[26474]: I0223 13:17:24.882987 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerDied","Data":"e375fe5c02f0608ef4aac501c8122f7edac3d21f041acfb53911dc7efc555b71"} Feb 23 13:17:24.883109 master-0 kubenswrapper[26474]: I0223 13:17:24.883029 26474 scope.go:117] "RemoveContainer" containerID="2b446631b7c4b5d92cf97ec52481c989bc08ce81af54c3a3ae206d553095556b" Feb 23 13:17:24.884577 master-0 kubenswrapper[26474]: I0223 13:17:24.884473 26474 scope.go:117] "RemoveContainer" containerID="e375fe5c02f0608ef4aac501c8122f7edac3d21f041acfb53911dc7efc555b71" Feb 23 13:17:24.885717 master-0 kubenswrapper[26474]: I0223 13:17:24.885571 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:24.887132 master-0 kubenswrapper[26474]: I0223 13:17:24.887050 26474 status_manager.go:851] "Failed to get status for pod" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" pod="openshift-console/console-dd5fdb7d7-wf5bd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-dd5fdb7d7-wf5bd\": dial tcp 192.168.32.10:6443: connect: connection 
refused" Feb 23 13:17:24.888561 master-0 kubenswrapper[26474]: I0223 13:17:24.888506 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:24.889527 master-0 kubenswrapper[26474]: I0223 13:17:24.889395 26474 status_manager.go:851] "Failed to get status for pod" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.393590 master-0 kubenswrapper[26474]: I0223 13:17:25.393414 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:25.395910 master-0 kubenswrapper[26474]: I0223 13:17:25.395814 26474 status_manager.go:851] "Failed to get status for pod" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.396799 master-0 kubenswrapper[26474]: I0223 13:17:25.396739 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.403682 master-0 kubenswrapper[26474]: I0223 13:17:25.403611 26474 status_manager.go:851] "Failed to get status for pod" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" pod="openshift-console/console-dd5fdb7d7-wf5bd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-dd5fdb7d7-wf5bd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.404671 master-0 kubenswrapper[26474]: I0223 13:17:25.404610 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.432526 master-0 kubenswrapper[26474]: I0223 13:17:25.432441 26474 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:25.432526 master-0 kubenswrapper[26474]: I0223 13:17:25.432513 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:25.433584 master-0 kubenswrapper[26474]: E0223 13:17:25.433520 26474 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:25.434914 master-0 kubenswrapper[26474]: I0223 13:17:25.434855 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:25.462257 master-0 kubenswrapper[26474]: W0223 13:17:25.462145 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487622064474ed0ec70f7bf2a0fcb80b.slice/crio-093aedee2b3e0a0039fcd51fd5e168eaf26880c4d12837bddbebcaf0dc75ad0a WatchSource:0}: Error finding container 093aedee2b3e0a0039fcd51fd5e168eaf26880c4d12837bddbebcaf0dc75ad0a: Status 404 returned error can't find the container with id 093aedee2b3e0a0039fcd51fd5e168eaf26880c4d12837bddbebcaf0dc75ad0a Feb 23 13:17:25.898694 master-0 kubenswrapper[26474]: I0223 13:17:25.898591 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager/1.log" Feb 23 13:17:25.900218 master-0 kubenswrapper[26474]: I0223 13:17:25.900148 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"5b3e061f9d09dab5dbaef15b3f1e67a0","Type":"ContainerStarted","Data":"2e3d12f7546ed9dc911e6b0badc88fa73138850feb384e2188c5098c9007f1a4"} Feb 23 13:17:25.902034 master-0 kubenswrapper[26474]: I0223 13:17:25.901953 26474 status_manager.go:851] "Failed to get status for pod" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.903249 master-0 kubenswrapper[26474]: I0223 13:17:25.903127 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.904311 master-0 kubenswrapper[26474]: I0223 13:17:25.904243 26474 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="8239ba0e55758bed3e925523ca0acc6125e2a43bba075b5ceeddc5e83568a779" exitCode=0 Feb 23 13:17:25.904311 master-0 kubenswrapper[26474]: I0223 13:17:25.904213 26474 status_manager.go:851] "Failed to get status for pod" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" pod="openshift-console/console-dd5fdb7d7-wf5bd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-dd5fdb7d7-wf5bd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.904973 master-0 kubenswrapper[26474]: I0223 13:17:25.904329 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerDied","Data":"8239ba0e55758bed3e925523ca0acc6125e2a43bba075b5ceeddc5e83568a779"} Feb 23 13:17:25.904973 master-0 kubenswrapper[26474]: I0223 13:17:25.904457 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"093aedee2b3e0a0039fcd51fd5e168eaf26880c4d12837bddbebcaf0dc75ad0a"} Feb 23 13:17:25.905128 master-0 kubenswrapper[26474]: I0223 13:17:25.905080 26474 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:25.905181 master-0 kubenswrapper[26474]: I0223 13:17:25.905129 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:25.905514 master-0 kubenswrapper[26474]: I0223 13:17:25.905463 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.906158 master-0 kubenswrapper[26474]: I0223 13:17:25.906107 26474 status_manager.go:851] "Failed to get status for pod" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.906225 master-0 kubenswrapper[26474]: E0223 13:17:25.906170 26474 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:25.907135 master-0 kubenswrapper[26474]: I0223 13:17:25.907050 26474 status_manager.go:851] "Failed to get status for pod" podUID="2146f0e3671998cad8bbc2464b009ab7" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.908176 master-0 kubenswrapper[26474]: I0223 13:17:25.907855 26474 status_manager.go:851] "Failed to get status for pod" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" pod="openshift-console/console-dd5fdb7d7-wf5bd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-dd5fdb7d7-wf5bd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:25.908689 master-0 kubenswrapper[26474]: I0223 13:17:25.908639 26474 status_manager.go:851] "Failed to get status for pod" podUID="75f01779-caef-46f3-ac91-89f32798535b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:17:26.915171 master-0 kubenswrapper[26474]: I0223 13:17:26.914660 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"9a5dda0e4b9e477ad64ad04cf8c9da790d7b3d8943ae5c91cd2cf9106954e73b"} Feb 23 13:17:26.915171 master-0 kubenswrapper[26474]: I0223 13:17:26.914715 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"6ac07798606cb0067b10211e7e0c10c8d448b7b08a2080e71bbb7fb995e08998"} Feb 23 13:17:26.915171 master-0 kubenswrapper[26474]: I0223 13:17:26.914728 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"9659e488cda1f47542b3fcbf0e2b38b335129ee540fdeb5905f3af5eb2d1b4b2"} Feb 23 13:17:27.597872 master-0 kubenswrapper[26474]: I0223 13:17:27.597230 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:17:27.607872 master-0 kubenswrapper[26474]: I0223 13:17:27.607808 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-79f8868b4-qms96" Feb 23 13:17:27.929460 master-0 kubenswrapper[26474]: I0223 13:17:27.929240 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"c6d0cdec49614b8667527a74e80da40d3866d5b7bc0ebbf6bb12c015bebfb1f6"} Feb 23 13:17:27.929460 master-0 kubenswrapper[26474]: I0223 13:17:27.929401 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"5b94b88abae57b03bcdfabbec778580967630cb856cc19740c157134beb53b9e"} Feb 23 13:17:27.930203 master-0 kubenswrapper[26474]: I0223 13:17:27.929485 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:27.930203 master-0 kubenswrapper[26474]: I0223 13:17:27.929604 26474 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:27.930203 master-0 kubenswrapper[26474]: I0223 13:17:27.929637 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:28.223032 master-0 kubenswrapper[26474]: I0223 13:17:28.222787 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:28.228825 master-0 kubenswrapper[26474]: I0223 13:17:28.228764 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:28.938766 master-0 kubenswrapper[26474]: I0223 13:17:28.938633 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:30.438828 master-0 kubenswrapper[26474]: I0223 13:17:30.438720 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:30.438828 master-0 kubenswrapper[26474]: I0223 13:17:30.438835 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:30.452630 master-0 kubenswrapper[26474]: I0223 13:17:30.452570 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:32.947011 master-0 kubenswrapper[26474]: I0223 13:17:32.946947 26474 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:33.770308 master-0 kubenswrapper[26474]: I0223 13:17:33.770230 26474 patch_prober.go:28] interesting pod/console-945f67446-9jmmn container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 23 13:17:33.770308 master-0 kubenswrapper[26474]: I0223 13:17:33.770303 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-945f67446-9jmmn" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 23 13:17:33.984775 master-0 kubenswrapper[26474]: I0223 13:17:33.984682 26474 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:33.984775 master-0 kubenswrapper[26474]: I0223 13:17:33.984774 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:33.991687 master-0 kubenswrapper[26474]: I0223 13:17:33.991156 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:17:33.994438 master-0 kubenswrapper[26474]: I0223 13:17:33.994381 26474 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="82574a22-d16f-479b-a082-f6152e08c8b4" Feb 23 13:17:34.337220 master-0 kubenswrapper[26474]: I0223 13:17:34.337104 26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Feb 23 13:17:34.337708 master-0 kubenswrapper[26474]: I0223 13:17:34.337247 26474 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Feb 23 13:17:34.995030 master-0 kubenswrapper[26474]: I0223 13:17:34.994942 26474 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:34.995030 master-0 kubenswrapper[26474]: I0223 13:17:34.994989 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="84d7c9c6-c449-40bb-8c3e-d028d18b5c70" Feb 23 13:17:38.421697 master-0 kubenswrapper[26474]: I0223 13:17:38.421590 26474 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="82574a22-d16f-479b-a082-f6152e08c8b4" Feb 23 13:17:38.663463 master-0 kubenswrapper[26474]: I0223 13:17:38.663306 26474 scope.go:117] "RemoveContainer" containerID="e83f60b44b83cfd6e3f9aea87eba10757c2f61020bb495edff5a188472446875" Feb 23 13:17:38.692884 master-0 kubenswrapper[26474]: I0223 13:17:38.692809 26474 scope.go:117] "RemoveContainer" containerID="d30622693465b0b62d620607efa00658fed43c117d15217ddcd12f4e9ddc2419" Feb 23 13:17:39.855403 master-0 kubenswrapper[26474]: I0223 13:17:39.855310 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:42.225075 master-0 kubenswrapper[26474]: I0223 13:17:42.225006 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 23 13:17:42.399100 master-0 kubenswrapper[26474]: I0223 13:17:42.399019 26474 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 23 13:17:42.434559 master-0 kubenswrapper[26474]: I0223 13:17:42.434458 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 13:17:42.548689 master-0 kubenswrapper[26474]: I0223 13:17:42.548618 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 23 13:17:42.583850 master-0 kubenswrapper[26474]: I0223 13:17:42.583771 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 13:17:43.205374 master-0 kubenswrapper[26474]: I0223 13:17:43.205296 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 13:17:43.536190 master-0 kubenswrapper[26474]: I0223 13:17:43.536096 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 23 13:17:43.589110 master-0 kubenswrapper[26474]: I0223 13:17:43.589060 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 13:17:43.619481 master-0 kubenswrapper[26474]: I0223 13:17:43.619001 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 23 13:17:43.771872 master-0 kubenswrapper[26474]: I0223 13:17:43.771772 26474 patch_prober.go:28] interesting pod/console-945f67446-9jmmn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 23 13:17:43.772120 master-0 kubenswrapper[26474]: I0223 13:17:43.771899 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-945f67446-9jmmn" 
podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 23 13:17:43.841623 master-0 kubenswrapper[26474]: I0223 13:17:43.841441 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 13:17:44.118813 master-0 kubenswrapper[26474]: I0223 13:17:44.118687 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 13:17:44.142455 master-0 kubenswrapper[26474]: I0223 13:17:44.142312 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 13:17:44.222258 master-0 kubenswrapper[26474]: I0223 13:17:44.222154 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 13:17:44.336685 master-0 kubenswrapper[26474]: I0223 13:17:44.336593 26474 patch_prober.go:28] interesting pod/console-7878b5757c-w9bdq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Feb 23 13:17:44.336943 master-0 kubenswrapper[26474]: I0223 13:17:44.336720 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Feb 23 13:17:44.351680 master-0 kubenswrapper[26474]: I0223 13:17:44.351593 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 23 13:17:44.411406 master-0 kubenswrapper[26474]: I0223 
13:17:44.411206 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 23 13:17:44.460518 master-0 kubenswrapper[26474]: I0223 13:17:44.460329 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 23 13:17:44.673317 master-0 kubenswrapper[26474]: I0223 13:17:44.672979 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 13:17:44.695119 master-0 kubenswrapper[26474]: I0223 13:17:44.695066 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 23 13:17:44.715947 master-0 kubenswrapper[26474]: I0223 13:17:44.715836 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 13:17:44.760035 master-0 kubenswrapper[26474]: I0223 13:17:44.759992 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 13:17:44.784373 master-0 kubenswrapper[26474]: I0223 13:17:44.784296 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 13:17:44.816805 master-0 kubenswrapper[26474]: I0223 13:17:44.816769 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 13:17:44.894007 master-0 kubenswrapper[26474]: I0223 13:17:44.893945 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 13:17:44.907168 master-0 kubenswrapper[26474]: I0223 13:17:44.907112 26474 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 13:17:44.981174 master-0 kubenswrapper[26474]: I0223 13:17:44.981050 
26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 13:17:44.987268 master-0 kubenswrapper[26474]: I0223 13:17:44.987225 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 23 13:17:45.109714 master-0 kubenswrapper[26474]: I0223 13:17:45.109671 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 23 13:17:45.227691 master-0 kubenswrapper[26474]: I0223 13:17:45.227635 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9ppv8" Feb 23 13:17:45.441490 master-0 kubenswrapper[26474]: I0223 13:17:45.441435 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:17:45.459273 master-0 kubenswrapper[26474]: I0223 13:17:45.459194 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-q7zn4" Feb 23 13:17:45.483982 master-0 kubenswrapper[26474]: I0223 13:17:45.483928 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 13:17:45.488291 master-0 kubenswrapper[26474]: I0223 13:17:45.488255 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 13:17:45.643699 master-0 kubenswrapper[26474]: I0223 13:17:45.643570 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 23 13:17:45.646671 master-0 kubenswrapper[26474]: I0223 13:17:45.646587 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9una895oaglcl" Feb 23 13:17:45.701800 master-0 
kubenswrapper[26474]: I0223 13:17:45.701647 26474 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 13:17:45.841148 master-0 kubenswrapper[26474]: I0223 13:17:45.840974 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 23 13:17:45.843578 master-0 kubenswrapper[26474]: I0223 13:17:45.843535 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 13:17:45.845989 master-0 kubenswrapper[26474]: I0223 13:17:45.845939 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-4zf94" Feb 23 13:17:45.875785 master-0 kubenswrapper[26474]: I0223 13:17:45.875705 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xfzk8" Feb 23 13:17:46.002789 master-0 kubenswrapper[26474]: I0223 13:17:46.002666 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 13:17:46.019884 master-0 kubenswrapper[26474]: I0223 13:17:46.019813 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 23 13:17:46.043532 master-0 kubenswrapper[26474]: I0223 13:17:46.043477 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 13:17:46.053046 master-0 kubenswrapper[26474]: I0223 13:17:46.052987 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-v42tl" Feb 23 13:17:46.114420 master-0 kubenswrapper[26474]: I0223 13:17:46.114371 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 13:17:46.121216 master-0 kubenswrapper[26474]: I0223 
13:17:46.121183 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 23 13:17:46.131907 master-0 kubenswrapper[26474]: I0223 13:17:46.131868 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 23 13:17:46.377106 master-0 kubenswrapper[26474]: I0223 13:17:46.377013 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 23 13:17:46.380714 master-0 kubenswrapper[26474]: I0223 13:17:46.380639 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 13:17:46.506459 master-0 kubenswrapper[26474]: I0223 13:17:46.506370 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 13:17:46.657870 master-0 kubenswrapper[26474]: I0223 13:17:46.657641 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 13:17:46.661670 master-0 kubenswrapper[26474]: I0223 13:17:46.661603 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 13:17:46.704036 master-0 kubenswrapper[26474]: I0223 13:17:46.703917 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 13:17:46.782766 master-0 kubenswrapper[26474]: I0223 13:17:46.782681 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:17:46.952634 master-0 kubenswrapper[26474]: I0223 13:17:46.952447 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 13:17:46.963562 master-0 kubenswrapper[26474]: I0223 13:17:46.963499 26474 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 13:17:47.101803 master-0 kubenswrapper[26474]: I0223 13:17:47.101703 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 13:17:47.103355 master-0 kubenswrapper[26474]: I0223 13:17:47.103303 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-sxgbj" Feb 23 13:17:47.194482 master-0 kubenswrapper[26474]: I0223 13:17:47.194388 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 13:17:47.232752 master-0 kubenswrapper[26474]: I0223 13:17:47.232547 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 23 13:17:47.256904 master-0 kubenswrapper[26474]: I0223 13:17:47.256842 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 13:17:47.258837 master-0 kubenswrapper[26474]: I0223 13:17:47.258765 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 23 13:17:47.281980 master-0 kubenswrapper[26474]: I0223 13:17:47.281888 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 13:17:47.290411 master-0 kubenswrapper[26474]: I0223 13:17:47.290320 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 13:17:47.354283 master-0 kubenswrapper[26474]: I0223 13:17:47.354202 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 13:17:47.381093 master-0 kubenswrapper[26474]: I0223 13:17:47.380989 26474 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 13:17:47.388934 master-0 kubenswrapper[26474]: I0223 13:17:47.388881 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 23 13:17:47.396433 master-0 kubenswrapper[26474]: I0223 13:17:47.396332 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 13:17:47.399863 master-0 kubenswrapper[26474]: I0223 13:17:47.399821 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 23 13:17:47.405124 master-0 kubenswrapper[26474]: I0223 13:17:47.405063 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 13:17:47.409560 master-0 kubenswrapper[26474]: I0223 13:17:47.408914 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 13:17:47.444367 master-0 kubenswrapper[26474]: I0223 13:17:47.444273 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:17:47.467866 master-0 kubenswrapper[26474]: I0223 13:17:47.467801 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 23 13:17:47.477373 master-0 kubenswrapper[26474]: I0223 13:17:47.477305 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 23 13:17:47.509441 master-0 kubenswrapper[26474]: I0223 13:17:47.508986 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-zwkpp" Feb 23 13:17:47.511275 master-0 kubenswrapper[26474]: I0223 
13:17:47.511026 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 13:17:47.588748 master-0 kubenswrapper[26474]: I0223 13:17:47.588680 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 13:17:47.676269 master-0 kubenswrapper[26474]: I0223 13:17:47.676179 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 23 13:17:47.727885 master-0 kubenswrapper[26474]: I0223 13:17:47.727816 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 23 13:17:47.731784 master-0 kubenswrapper[26474]: I0223 13:17:47.731721 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 13:17:47.747315 master-0 kubenswrapper[26474]: I0223 13:17:47.747258 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:17:47.756580 master-0 kubenswrapper[26474]: I0223 13:17:47.756503 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nlmwg" Feb 23 13:17:47.770019 master-0 kubenswrapper[26474]: I0223 13:17:47.769890 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 13:17:47.784681 master-0 kubenswrapper[26474]: I0223 13:17:47.784629 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 23 13:17:47.831453 master-0 kubenswrapper[26474]: I0223 13:17:47.831379 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 23 13:17:47.842183 master-0 kubenswrapper[26474]: I0223 13:17:47.842120 26474 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 23 13:17:47.943627 master-0 kubenswrapper[26474]: I0223 13:17:47.943579 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 13:17:47.968959 master-0 kubenswrapper[26474]: I0223 13:17:47.968903 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 13:17:48.025247 master-0 kubenswrapper[26474]: I0223 13:17:48.005878 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-8d884" Feb 23 13:17:48.044439 master-0 kubenswrapper[26474]: I0223 13:17:48.044359 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 13:17:48.121543 master-0 kubenswrapper[26474]: I0223 13:17:48.121485 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 23 13:17:48.122818 master-0 kubenswrapper[26474]: I0223 13:17:48.122773 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 13:17:48.208662 master-0 kubenswrapper[26474]: I0223 13:17:48.208547 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 23 13:17:48.236995 master-0 kubenswrapper[26474]: I0223 13:17:48.236928 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 13:17:48.307559 master-0 kubenswrapper[26474]: I0223 13:17:48.307410 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:17:48.412868 master-0 kubenswrapper[26474]: I0223 
13:17:48.412789 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 23 13:17:48.435626 master-0 kubenswrapper[26474]: I0223 13:17:48.435521 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 13:17:48.467707 master-0 kubenswrapper[26474]: I0223 13:17:48.467618 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 13:17:48.512982 master-0 kubenswrapper[26474]: I0223 13:17:48.512870 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 23 13:17:48.592126 master-0 kubenswrapper[26474]: I0223 13:17:48.591903 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:17:48.638440 master-0 kubenswrapper[26474]: I0223 13:17:48.638296 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:17:48.743241 master-0 kubenswrapper[26474]: I0223 13:17:48.743140 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 13:17:48.775944 master-0 kubenswrapper[26474]: I0223 13:17:48.775842 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 23 13:17:48.790751 master-0 kubenswrapper[26474]: I0223 13:17:48.790672 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 13:17:48.815202 master-0 kubenswrapper[26474]: I0223 13:17:48.815114 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 13:17:48.827299 master-0 kubenswrapper[26474]: 
I0223 13:17:48.827245 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-6tkzc" Feb 23 13:17:48.834641 master-0 kubenswrapper[26474]: I0223 13:17:48.834600 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:17:48.866554 master-0 kubenswrapper[26474]: I0223 13:17:48.866304 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 13:17:48.878590 master-0 kubenswrapper[26474]: I0223 13:17:48.878509 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 13:17:48.893052 master-0 kubenswrapper[26474]: I0223 13:17:48.892990 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:17:49.000554 master-0 kubenswrapper[26474]: I0223 13:17:49.000453 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 23 13:17:49.037140 master-0 kubenswrapper[26474]: I0223 13:17:49.036983 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 13:17:49.062718 master-0 kubenswrapper[26474]: I0223 13:17:49.062654 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4whws" Feb 23 13:17:49.127146 master-0 kubenswrapper[26474]: I0223 13:17:49.127010 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:17:49.150956 master-0 kubenswrapper[26474]: I0223 13:17:49.150888 26474 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 13:17:49.162776 master-0 kubenswrapper[26474]: I0223 13:17:49.162656 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:17:49.173317 master-0 kubenswrapper[26474]: I0223 13:17:49.173264 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 13:17:49.180927 master-0 kubenswrapper[26474]: I0223 13:17:49.180872 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:17:49.189135 master-0 kubenswrapper[26474]: I0223 13:17:49.189059 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 13:17:49.237766 master-0 kubenswrapper[26474]: I0223 13:17:49.237635 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-vqpkz" Feb 23 13:17:49.306560 master-0 kubenswrapper[26474]: I0223 13:17:49.306494 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:17:49.393574 master-0 kubenswrapper[26474]: I0223 13:17:49.393448 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-2hz68" Feb 23 13:17:49.670190 master-0 kubenswrapper[26474]: I0223 13:17:49.670016 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 23 13:17:49.673319 master-0 kubenswrapper[26474]: I0223 13:17:49.673258 26474 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 13:17:49.707562 master-0 kubenswrapper[26474]: I0223 13:17:49.707155 26474 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 23 13:17:49.754907 master-0 kubenswrapper[26474]: I0223 13:17:49.754815 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 13:17:49.785269 master-0 kubenswrapper[26474]: I0223 13:17:49.784819 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 13:17:49.905239 master-0 kubenswrapper[26474]: I0223 13:17:49.905173 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 13:17:49.926016 master-0 kubenswrapper[26474]: I0223 13:17:49.925876 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 13:17:50.005709 master-0 kubenswrapper[26474]: I0223 13:17:50.005635 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 13:17:50.079311 master-0 kubenswrapper[26474]: I0223 13:17:50.079246 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 23 13:17:50.113636 master-0 kubenswrapper[26474]: I0223 13:17:50.113529 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 13:17:50.137145 master-0 kubenswrapper[26474]: I0223 13:17:50.137068 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 13:17:50.141872 master-0 kubenswrapper[26474]: I0223 13:17:50.141760 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:17:50.185554 master-0 kubenswrapper[26474]: I0223 13:17:50.185387 26474 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 23 13:17:50.189155 master-0 kubenswrapper[26474]: I0223 13:17:50.189087 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:17:50.251541 master-0 kubenswrapper[26474]: I0223 13:17:50.248384 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-f8dtp"
Feb 23 13:17:50.270523 master-0 kubenswrapper[26474]: I0223 13:17:50.270435 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-gm6kx"
Feb 23 13:17:50.292082 master-0 kubenswrapper[26474]: I0223 13:17:50.292012 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 23 13:17:50.349214 master-0 kubenswrapper[26474]: I0223 13:17:50.349112 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 23 13:17:50.349789 master-0 kubenswrapper[26474]: I0223 13:17:50.349720 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 23 13:17:50.404752 master-0 kubenswrapper[26474]: I0223 13:17:50.404657 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-qqvc9"
Feb 23 13:17:50.409984 master-0 kubenswrapper[26474]: I0223 13:17:50.409917 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 23 13:17:50.410403 master-0 kubenswrapper[26474]: I0223 13:17:50.410326 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 23 13:17:50.431379 master-0 kubenswrapper[26474]: I0223 13:17:50.431276 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 23 13:17:50.566287 master-0 kubenswrapper[26474]: I0223 13:17:50.566206 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 13:17:50.572586 master-0 kubenswrapper[26474]: I0223 13:17:50.572534 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-nhgb8"
Feb 23 13:17:50.574983 master-0 kubenswrapper[26474]: I0223 13:17:50.574919 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 23 13:17:50.641835 master-0 kubenswrapper[26474]: I0223 13:17:50.641754 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-p9nh5"
Feb 23 13:17:50.643940 master-0 kubenswrapper[26474]: I0223 13:17:50.643859 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 23 13:17:50.718612 master-0 kubenswrapper[26474]: I0223 13:17:50.718509 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 23 13:17:50.718612 master-0 kubenswrapper[26474]: I0223 13:17:50.718517 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:17:50.719623 master-0 kubenswrapper[26474]: I0223 13:17:50.719571 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-69tp4ba64sllc"
Feb 23 13:17:50.719752 master-0 kubenswrapper[26474]: I0223 13:17:50.719639 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 23 13:17:50.762928 master-0 kubenswrapper[26474]: I0223 13:17:50.762846 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 23 13:17:50.775260 master-0 kubenswrapper[26474]: I0223 13:17:50.775148 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 23 13:17:50.780301 master-0 kubenswrapper[26474]: I0223 13:17:50.780231 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 23 13:17:50.960476 master-0 kubenswrapper[26474]: I0223 13:17:50.960280 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 23 13:17:50.973975 master-0 kubenswrapper[26474]: I0223 13:17:50.973889 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 23 13:17:50.983844 master-0 kubenswrapper[26474]: I0223 13:17:50.983763 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 23 13:17:51.014448 master-0 kubenswrapper[26474]: I0223 13:17:51.014299 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 23 13:17:51.026611 master-0 kubenswrapper[26474]: I0223 13:17:51.026521 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 23 13:17:51.028287 master-0 kubenswrapper[26474]: I0223 13:17:51.028203 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 23 13:17:51.063672 master-0 kubenswrapper[26474]: I0223 13:17:51.063534 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 23 13:17:51.068547 master-0 kubenswrapper[26474]: I0223 13:17:51.064894 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 23 13:17:51.119716 master-0 kubenswrapper[26474]: I0223 13:17:51.119574 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 23 13:17:51.126214 master-0 kubenswrapper[26474]: I0223 13:17:51.126128 26474 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 13:17:51.131378 master-0 kubenswrapper[26474]: I0223 13:17:51.131189 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=40.131158327 podStartE2EDuration="40.131158327s" podCreationTimestamp="2026-02-23 13:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:17:32.665355264 +0000 UTC m=+174.511862941" watchObservedRunningTime="2026-02-23 13:17:51.131158327 +0000 UTC m=+192.977666044"
Feb 23 13:17:51.148517 master-0 kubenswrapper[26474]: I0223 13:17:51.146484 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-dd5fdb7d7-wf5bd","openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:17:51.148517 master-0 kubenswrapper[26474]: I0223 13:17:51.146589 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:17:51.193678 master-0 kubenswrapper[26474]: I0223 13:17:51.193545 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=19.19352082 podStartE2EDuration="19.19352082s" podCreationTimestamp="2026-02-23 13:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:17:51.176669669 +0000 UTC m=+193.023177386" watchObservedRunningTime="2026-02-23 13:17:51.19352082 +0000 UTC m=+193.040028527"
Feb 23 13:17:51.197594 master-0 kubenswrapper[26474]: I0223 13:17:51.197544 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:17:51.283984 master-0 kubenswrapper[26474]: I0223 13:17:51.283888 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 13:17:51.343702 master-0 kubenswrapper[26474]: I0223 13:17:51.342212 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 23 13:17:51.408269 master-0 kubenswrapper[26474]: I0223 13:17:51.408152 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 23 13:17:51.469039 master-0 kubenswrapper[26474]: I0223 13:17:51.468801 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 23 13:17:51.541671 master-0 kubenswrapper[26474]: I0223 13:17:51.541549 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 23 13:17:51.655997 master-0 kubenswrapper[26474]: I0223 13:17:51.655921 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-dsztp"
Feb 23 13:17:51.662159 master-0 kubenswrapper[26474]: I0223 13:17:51.662112 26474 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 13:17:51.720978 master-0 kubenswrapper[26474]: I0223 13:17:51.720889 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 23 13:17:51.765284 master-0 kubenswrapper[26474]: I0223 13:17:51.765183 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 23 13:17:51.877713 master-0 kubenswrapper[26474]: I0223 13:17:51.877517 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 23 13:17:51.910829 master-0 kubenswrapper[26474]: I0223 13:17:51.910715 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 23 13:17:51.911085 master-0 kubenswrapper[26474]: I0223 13:17:51.910836 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 23 13:17:51.963805 master-0 kubenswrapper[26474]: I0223 13:17:51.963699 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 23 13:17:52.003429 master-0 kubenswrapper[26474]: I0223 13:17:52.003296 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 23 13:17:52.013826 master-0 kubenswrapper[26474]: I0223 13:17:52.013773 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-46ht7"
Feb 23 13:17:52.054513 master-0 kubenswrapper[26474]: I0223 13:17:52.054405 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 23 13:17:52.056892 master-0 kubenswrapper[26474]: I0223 13:17:52.056809 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:17:52.180226 master-0 kubenswrapper[26474]: I0223 13:17:52.180047 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 23 13:17:52.252248 master-0 kubenswrapper[26474]: I0223 13:17:52.252142 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 23 13:17:52.274260 master-0 kubenswrapper[26474]: I0223 13:17:52.274156 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 23 13:17:52.285240 master-0 kubenswrapper[26474]: I0223 13:17:52.285100 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 13:17:52.334258 master-0 kubenswrapper[26474]: I0223 13:17:52.334173 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 13:17:52.405641 master-0 kubenswrapper[26474]: I0223 13:17:52.405385 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" path="/var/lib/kubelet/pods/3a0b32d2-df4f-44e9-a841-b7e925783400/volumes"
Feb 23 13:17:52.460407 master-0 kubenswrapper[26474]: I0223 13:17:52.460185 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 23 13:17:52.497917 master-0 kubenswrapper[26474]: I0223 13:17:52.497628 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 23 13:17:52.508518 master-0 kubenswrapper[26474]: I0223 13:17:52.508421 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 23 13:17:52.511282 master-0 kubenswrapper[26474]: I0223 13:17:52.511189 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 23 13:17:52.552468 master-0 kubenswrapper[26474]: I0223 13:17:52.552326 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 23 13:17:52.557018 master-0 kubenswrapper[26474]: I0223 13:17:52.556930 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 23 13:17:52.571354 master-0 kubenswrapper[26474]: I0223 13:17:52.571214 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 23 13:17:52.584039 master-0 kubenswrapper[26474]: I0223 13:17:52.583929 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 23 13:17:52.584573 master-0 kubenswrapper[26474]: I0223 13:17:52.584514 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 23 13:17:52.668280 master-0 kubenswrapper[26474]: I0223 13:17:52.668223 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 23 13:17:52.794022 master-0 kubenswrapper[26474]: I0223 13:17:52.793930 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 23 13:17:52.806250 master-0 kubenswrapper[26474]: I0223 13:17:52.806165 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 23 13:17:52.874870 master-0 kubenswrapper[26474]: I0223 13:17:52.874813 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 23 13:17:52.941617 master-0 kubenswrapper[26474]: I0223 13:17:52.941568 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 23 13:17:53.041650 master-0 kubenswrapper[26474]: I0223 13:17:53.041569 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Feb 23 13:17:53.047416 master-0 kubenswrapper[26474]: I0223 13:17:53.047289 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 23 13:17:53.091185 master-0 kubenswrapper[26474]: I0223 13:17:53.091104 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 23 13:17:53.105544 master-0 kubenswrapper[26474]: I0223 13:17:53.105417 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 23 13:17:53.116876 master-0 kubenswrapper[26474]: I0223 13:17:53.116824 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 23 13:17:53.207717 master-0 kubenswrapper[26474]: I0223 13:17:53.207651 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 23 13:17:53.238173 master-0 kubenswrapper[26474]: I0223 13:17:53.238102 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 13:17:53.279617 master-0 kubenswrapper[26474]: I0223 13:17:53.279541 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 23 13:17:53.305231 master-0 kubenswrapper[26474]: I0223 13:17:53.305095 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 23 13:17:53.348414 master-0 kubenswrapper[26474]: I0223 13:17:53.348271 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 23 13:17:53.370529 master-0 kubenswrapper[26474]: I0223 13:17:53.370469 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nhhd2"
Feb 23 13:17:53.418062 master-0 kubenswrapper[26474]: I0223 13:17:53.417981 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 23 13:17:53.422841 master-0 kubenswrapper[26474]: I0223 13:17:53.422790 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 23 13:17:53.430832 master-0 kubenswrapper[26474]: I0223 13:17:53.430799 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 23 13:17:53.444162 master-0 kubenswrapper[26474]: I0223 13:17:53.444120 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 23 13:17:53.540308 master-0 kubenswrapper[26474]: I0223 13:17:53.540232 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 23 13:17:53.550057 master-0 kubenswrapper[26474]: I0223 13:17:53.550019 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 23 13:17:53.609405 master-0 kubenswrapper[26474]: I0223 13:17:53.609178 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-kbbcm"
Feb 23 13:17:53.620064 master-0 kubenswrapper[26474]: I0223 13:17:53.619957 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 23 13:17:53.640745 master-0 kubenswrapper[26474]: I0223 13:17:53.640661 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 23 13:17:53.671691 master-0 kubenswrapper[26474]: I0223 13:17:53.671565 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 23 13:17:53.702266 master-0 kubenswrapper[26474]: I0223 13:17:53.702176 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 23 13:17:53.747431 master-0 kubenswrapper[26474]: I0223 13:17:53.745107 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 23 13:17:53.773725 master-0 kubenswrapper[26474]: I0223 13:17:53.773640 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:17:53.778390 master-0 kubenswrapper[26474]: I0223 13:17:53.778297 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:17:53.810601 master-0 kubenswrapper[26474]: I0223 13:17:53.810518 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 23 13:17:53.837597 master-0 kubenswrapper[26474]: I0223 13:17:53.837513 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 23 13:17:53.899513 master-0 kubenswrapper[26474]: I0223 13:17:53.899029 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7878b5757c-w9bdq"]
Feb 23 13:17:53.952752 master-0 kubenswrapper[26474]: I0223 13:17:53.952658 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 23 13:17:53.984108 master-0 kubenswrapper[26474]: I0223 13:17:53.984003 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:17:53.984296 master-0 kubenswrapper[26474]: I0223 13:17:53.984005 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-jh64m"
Feb 23 13:17:53.987245 master-0 kubenswrapper[26474]: I0223 13:17:53.987206 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 23 13:17:53.995160 master-0 kubenswrapper[26474]: I0223 13:17:53.995131 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 23 13:17:53.996932 master-0 kubenswrapper[26474]: I0223 13:17:53.996850 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:17:54.000290 master-0 kubenswrapper[26474]: I0223 13:17:54.000268 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8rwdc"
Feb 23 13:17:54.051054 master-0 kubenswrapper[26474]: I0223 13:17:54.050962 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 23 13:17:54.094945 master-0 kubenswrapper[26474]: I0223 13:17:54.094871 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 23 13:17:54.110107 master-0 kubenswrapper[26474]: I0223 13:17:54.110048 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 13:17:54.115609 master-0 kubenswrapper[26474]: I0223 13:17:54.115568 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 23 13:17:54.125314 master-0 kubenswrapper[26474]: I0223 13:17:54.125270 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 23 13:17:54.137525 master-0 kubenswrapper[26474]: I0223 13:17:54.137399 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 23 13:17:54.156374 master-0 kubenswrapper[26474]: I0223 13:17:54.156261 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 23 13:17:54.158592 master-0 kubenswrapper[26474]: I0223 13:17:54.158285 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 23 13:17:54.205305 master-0 kubenswrapper[26474]: I0223 13:17:54.205187 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 23 13:17:54.225587 master-0 kubenswrapper[26474]: I0223 13:17:54.225525 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 23 13:17:54.275050 master-0 kubenswrapper[26474]: I0223 13:17:54.274986 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 23 13:17:54.327454 master-0 kubenswrapper[26474]: I0223 13:17:54.327386 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-t7xq8"
Feb 23 13:17:54.352494 master-0 kubenswrapper[26474]: I0223 13:17:54.352428 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 23 13:17:54.450128 master-0 kubenswrapper[26474]: I0223 13:17:54.450000 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 23 13:17:54.454786 master-0 kubenswrapper[26474]: I0223 13:17:54.454746 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 23 13:17:54.478356 master-0 kubenswrapper[26474]: I0223 13:17:54.478289 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 23 13:17:54.572896 master-0 kubenswrapper[26474]: I0223 13:17:54.572816 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-dndpz"
Feb 23 13:17:54.609887 master-0 kubenswrapper[26474]: I0223 13:17:54.609786 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 23 13:17:54.653102 master-0 kubenswrapper[26474]: I0223 13:17:54.653011 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 23 13:17:54.674192 master-0 kubenswrapper[26474]: I0223 13:17:54.674114 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 23 13:17:54.692721 master-0 kubenswrapper[26474]: I0223 13:17:54.692601 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 23 13:17:54.693583 master-0 kubenswrapper[26474]: I0223 13:17:54.693520 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-639sbo1a4as7e"
Feb 23 13:17:54.695385 master-0 kubenswrapper[26474]: I0223 13:17:54.695271 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 23 13:17:54.808952 master-0 kubenswrapper[26474]: I0223 13:17:54.808866 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 23 13:17:54.929460 master-0 kubenswrapper[26474]: I0223 13:17:54.929376 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 23 13:17:54.961649 master-0 kubenswrapper[26474]: I0223 13:17:54.961570 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 23 13:17:55.046651 master-0 kubenswrapper[26474]: I0223 13:17:55.046560 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-49pt2ng1l1349"
Feb 23 13:17:55.067968 master-0 kubenswrapper[26474]: I0223 13:17:55.067787 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rlrxt"
Feb 23 13:17:55.073101 master-0 kubenswrapper[26474]: I0223 13:17:55.073057 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 23 13:17:55.104458 master-0 kubenswrapper[26474]: I0223 13:17:55.103948 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 23 13:17:55.154500 master-0 kubenswrapper[26474]: I0223 13:17:55.154417 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 23 13:17:55.185061 master-0 kubenswrapper[26474]: I0223 13:17:55.185000 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-z5rh8"
Feb 23 13:17:55.250955 master-0 kubenswrapper[26474]: I0223 13:17:55.250857 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 23 13:17:55.265084 master-0 kubenswrapper[26474]: I0223 13:17:55.264982 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 23 13:17:55.320162 master-0 kubenswrapper[26474]: I0223 13:17:55.320016 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 23 13:17:55.335663 master-0 kubenswrapper[26474]: I0223 13:17:55.335579 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 23 13:17:55.336873 master-0 kubenswrapper[26474]: I0223 13:17:55.336798 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 23 13:17:55.361828 master-0 kubenswrapper[26474]: I0223 13:17:55.361758 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 23 13:17:55.370240 master-0 kubenswrapper[26474]: I0223 13:17:55.370186 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 23 13:17:55.427955 master-0 kubenswrapper[26474]: I0223 13:17:55.427850 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 23 13:17:55.444551 master-0 kubenswrapper[26474]: I0223 13:17:55.444468 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:17:55.609694 master-0 kubenswrapper[26474]: I0223 13:17:55.609550 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 23 13:17:55.695582 master-0 kubenswrapper[26474]: I0223 13:17:55.695488 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 23 13:17:55.696362 master-0 kubenswrapper[26474]: I0223 13:17:55.696293 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 13:17:55.710658 master-0 kubenswrapper[26474]: I0223 13:17:55.710578 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 13:17:55.722526 master-0 kubenswrapper[26474]: I0223 13:17:55.722472 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 23 13:17:55.958159 master-0 kubenswrapper[26474]: I0223 13:17:55.957988 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 23 13:17:55.997980 master-0 kubenswrapper[26474]: I0223 13:17:55.997910 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 23 13:17:56.001049 master-0 kubenswrapper[26474]: I0223 13:17:56.000978 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 23 13:17:56.072532 master-0 kubenswrapper[26474]: I0223 13:17:56.072442 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 23 13:17:56.182493 master-0 kubenswrapper[26474]: I0223 13:17:56.182408 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2ttqf"
Feb 23 13:17:56.234428 master-0 kubenswrapper[26474]: I0223 13:17:56.234237 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 13:17:56.265050 master-0 kubenswrapper[26474]: I0223 13:17:56.264989 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 23 13:17:56.390702 master-0 kubenswrapper[26474]: I0223 13:17:56.390610 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 23 13:17:56.410993 master-0 kubenswrapper[26474]: I0223 13:17:56.410887 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 23 13:17:56.411488 master-0 kubenswrapper[26474]: I0223 13:17:56.411112 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 23 13:17:56.417794 master-0 kubenswrapper[26474]: I0223 13:17:56.417738 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 23 13:17:56.675882 master-0 kubenswrapper[26474]: I0223 13:17:56.675565 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 23 13:17:56.676248 master-0 kubenswrapper[26474]: I0223 13:17:56.676206 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-klhdv"
Feb 23 13:17:56.689418 master-0 kubenswrapper[26474]: I0223 13:17:56.689162 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 23 13:17:56.778076 master-0 kubenswrapper[26474]: I0223 13:17:56.777964 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7788ddcf8d-rzkgn"]
Feb 23 13:17:56.778476 master-0 kubenswrapper[26474]: I0223 13:17:56.778094 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:17:56.778476 master-0 kubenswrapper[26474]: E0223 13:17:56.778471 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f01779-caef-46f3-ac91-89f32798535b" containerName="installer"
Feb 23 13:17:56.778624 master-0 kubenswrapper[26474]: I0223 13:17:56.778493 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f01779-caef-46f3-ac91-89f32798535b" containerName="installer"
Feb 23 13:17:56.778624 master-0 kubenswrapper[26474]: E0223 13:17:56.778519 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console"
Feb 23 13:17:56.778624 master-0 kubenswrapper[26474]: I0223 13:17:56.778527 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console"
Feb 23 13:17:56.778865 master-0 kubenswrapper[26474]: I0223 13:17:56.778810 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0b32d2-df4f-44e9-a841-b7e925783400" containerName="console"
Feb 23 13:17:56.778865 master-0 kubenswrapper[26474]: I0223 13:17:56.778849 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f01779-caef-46f3-ac91-89f32798535b" containerName="installer"
Feb 23 13:17:56.779435 master-0 kubenswrapper[26474]: I0223 13:17:56.779390 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.789631 master-0 kubenswrapper[26474]: I0223 13:17:56.789266 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7788ddcf8d-rzkgn"]
Feb 23 13:17:56.798479 master-0 kubenswrapper[26474]: I0223 13:17:56.796873 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-oauth-config\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.798479 master-0 kubenswrapper[26474]: I0223 13:17:56.796961 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-oauth-serving-cert\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.798479 master-0 kubenswrapper[26474]: I0223 13:17:56.796992 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-console-config\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.798479 master-0 kubenswrapper[26474]: I0223 13:17:56.797078 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-trusted-ca-bundle\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.798479 master-0 kubenswrapper[26474]: I0223 13:17:56.797138 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-service-ca\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.798479 master-0 kubenswrapper[26474]: I0223 13:17:56.797211 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-serving-cert\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.798479 master-0 kubenswrapper[26474]: I0223 13:17:56.797249 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxss\" (UniqueName: \"kubernetes.io/projected/19fc605b-8e19-4779-9b58-afea88250452-kube-api-access-rpxss\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.892089 master-0 kubenswrapper[26474]: I0223 13:17:56.891951 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 23 13:17:56.895879 master-0 kubenswrapper[26474]: I0223 13:17:56.895814 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:17:56.898663 master-0 kubenswrapper[26474]: I0223 13:17:56.898578 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-oauth-config\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.898836 master-0 kubenswrapper[26474]: I0223 13:17:56.898797 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-oauth-serving-cert\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.898959 master-0 kubenswrapper[26474]: I0223 13:17:56.898857 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-console-config\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:17:56.898959 master-0 kubenswrapper[26474]: I0223 13:17:56.898920 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-trusted-ca-bundle\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.899181 master-0 kubenswrapper[26474]: I0223 13:17:56.898986 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-service-ca\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.899181 master-0 kubenswrapper[26474]: I0223 13:17:56.899026 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-serving-cert\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.899181 master-0 kubenswrapper[26474]: I0223 13:17:56.899046 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxss\" (UniqueName: \"kubernetes.io/projected/19fc605b-8e19-4779-9b58-afea88250452-kube-api-access-rpxss\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.900424 master-0 kubenswrapper[26474]: I0223 13:17:56.900308 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-oauth-serving-cert\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.900595 master-0 kubenswrapper[26474]: I0223 13:17:56.900535 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-service-ca\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.900709 master-0 kubenswrapper[26474]: I0223 13:17:56.900670 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-console-config\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.900817 master-0 kubenswrapper[26474]: I0223 13:17:56.900707 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-trusted-ca-bundle\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.904520 master-0 kubenswrapper[26474]: I0223 13:17:56.902861 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-oauth-config\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.904520 master-0 kubenswrapper[26474]: I0223 13:17:56.902975 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-serving-cert\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.917782 master-0 kubenswrapper[26474]: I0223 13:17:56.917724 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rpxss\" (UniqueName: \"kubernetes.io/projected/19fc605b-8e19-4779-9b58-afea88250452-kube-api-access-rpxss\") pod \"console-7788ddcf8d-rzkgn\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") " pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:56.926604 master-0 kubenswrapper[26474]: I0223 13:17:56.926512 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-zjvhw" Feb 23 13:17:56.951680 master-0 kubenswrapper[26474]: I0223 13:17:56.951604 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 13:17:57.093406 master-0 kubenswrapper[26474]: I0223 13:17:57.093319 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 23 13:17:57.116355 master-0 kubenswrapper[26474]: I0223 13:17:57.116245 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:17:57.184191 master-0 kubenswrapper[26474]: I0223 13:17:57.184054 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 23 13:17:57.270872 master-0 kubenswrapper[26474]: I0223 13:17:57.270786 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 13:17:57.319780 master-0 kubenswrapper[26474]: I0223 13:17:57.319690 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 13:17:57.389662 master-0 kubenswrapper[26474]: I0223 13:17:57.389603 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6vfhg" Feb 23 13:17:57.420081 master-0 kubenswrapper[26474]: I0223 13:17:57.419976 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 13:17:57.463325 master-0 kubenswrapper[26474]: I0223 13:17:57.463107 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-m9scs" Feb 23 13:17:57.530661 master-0 kubenswrapper[26474]: I0223 13:17:57.530568 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 13:17:57.588051 master-0 kubenswrapper[26474]: I0223 13:17:57.587995 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7788ddcf8d-rzkgn"] Feb 23 13:17:57.589707 master-0 kubenswrapper[26474]: W0223 13:17:57.589578 26474 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19fc605b_8e19_4779_9b58_afea88250452.slice/crio-0244c49f37b4f8b752ca6e5ef3babe2861bc8d3ac86402816bc837898a7283ec WatchSource:0}: Error finding container 0244c49f37b4f8b752ca6e5ef3babe2861bc8d3ac86402816bc837898a7283ec: Status 404 returned error can't find the container with id 0244c49f37b4f8b752ca6e5ef3babe2861bc8d3ac86402816bc837898a7283ec Feb 23 13:17:57.708419 master-0 kubenswrapper[26474]: I0223 13:17:57.707519 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:17:57.712641 master-0 kubenswrapper[26474]: I0223 13:17:57.712560 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 23 13:17:57.743422 master-0 kubenswrapper[26474]: I0223 13:17:57.743360 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 23 13:17:57.775974 master-0 kubenswrapper[26474]: I0223 13:17:57.775904 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 23 13:17:57.885202 master-0 kubenswrapper[26474]: I0223 13:17:57.884985 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 13:17:58.209793 master-0 kubenswrapper[26474]: I0223 13:17:58.209721 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7788ddcf8d-rzkgn" event={"ID":"19fc605b-8e19-4779-9b58-afea88250452","Type":"ContainerStarted","Data":"750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2"} Feb 23 13:17:58.209793 master-0 kubenswrapper[26474]: I0223 13:17:58.209778 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7788ddcf8d-rzkgn" 
event={"ID":"19fc605b-8e19-4779-9b58-afea88250452","Type":"ContainerStarted","Data":"0244c49f37b4f8b752ca6e5ef3babe2861bc8d3ac86402816bc837898a7283ec"} Feb 23 13:17:58.228118 master-0 kubenswrapper[26474]: I0223 13:17:58.227983 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 23 13:17:58.229658 master-0 kubenswrapper[26474]: I0223 13:17:58.229542 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7788ddcf8d-rzkgn" podStartSLOduration=2.229520535 podStartE2EDuration="2.229520535s" podCreationTimestamp="2026-02-23 13:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:17:58.227907895 +0000 UTC m=+200.074415572" watchObservedRunningTime="2026-02-23 13:17:58.229520535 +0000 UTC m=+200.076028222" Feb 23 13:17:58.346369 master-0 kubenswrapper[26474]: I0223 13:17:58.346281 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 23 13:17:58.523786 master-0 kubenswrapper[26474]: I0223 13:17:58.523719 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 23 13:17:58.614435 master-0 kubenswrapper[26474]: I0223 13:17:58.614389 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 13:17:58.679361 master-0 kubenswrapper[26474]: I0223 13:17:58.679281 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 23 13:17:58.792811 master-0 kubenswrapper[26474]: I0223 13:17:58.792696 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 23 13:17:58.819713 master-0 kubenswrapper[26474]: 
I0223 13:17:58.819672 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 13:17:58.903687 master-0 kubenswrapper[26474]: I0223 13:17:58.903643 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:17:58.971928 master-0 kubenswrapper[26474]: I0223 13:17:58.971832 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 23 13:17:59.088244 master-0 kubenswrapper[26474]: I0223 13:17:59.088050 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-wgs7j" Feb 23 13:17:59.548414 master-0 kubenswrapper[26474]: I0223 13:17:59.548322 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 13:17:59.887104 master-0 kubenswrapper[26474]: I0223 13:17:59.886835 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:18:00.023813 master-0 kubenswrapper[26474]: I0223 13:18:00.023715 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 13:18:06.644163 master-0 kubenswrapper[26474]: I0223 13:18:06.644010 26474 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:18:06.644997 master-0 kubenswrapper[26474]: I0223 13:18:06.644254 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor" containerID="cri-o://9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7" gracePeriod=5 Feb 23 13:18:07.116940 master-0 
kubenswrapper[26474]: I0223 13:18:07.116839 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:18:07.117445 master-0 kubenswrapper[26474]: I0223 13:18:07.117364 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:18:07.124731 master-0 kubenswrapper[26474]: I0223 13:18:07.124678 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:18:07.284238 master-0 kubenswrapper[26474]: I0223 13:18:07.284140 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7788ddcf8d-rzkgn" Feb 23 13:18:07.364402 master-0 kubenswrapper[26474]: I0223 13:18:07.362139 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-945f67446-9jmmn"] Feb 23 13:18:12.238881 master-0 kubenswrapper[26474]: I0223 13:18:12.238791 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log" Feb 23 13:18:12.239513 master-0 kubenswrapper[26474]: I0223 13:18:12.238892 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:18:12.335568 master-0 kubenswrapper[26474]: I0223 13:18:12.329052 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log" Feb 23 13:18:12.335568 master-0 kubenswrapper[26474]: I0223 13:18:12.329157 26474 generic.go:334] "Generic (PLEG): container finished" podID="2146f0e3671998cad8bbc2464b009ab7" containerID="9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7" exitCode=137 Feb 23 13:18:12.335568 master-0 kubenswrapper[26474]: I0223 13:18:12.329264 26474 scope.go:117] "RemoveContainer" containerID="9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7" Feb 23 13:18:12.335568 master-0 kubenswrapper[26474]: I0223 13:18:12.329548 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:18:12.350625 master-0 kubenswrapper[26474]: I0223 13:18:12.350543 26474 scope.go:117] "RemoveContainer" containerID="9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7" Feb 23 13:18:12.351139 master-0 kubenswrapper[26474]: E0223 13:18:12.351082 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7\": container with ID starting with 9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7 not found: ID does not exist" containerID="9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7" Feb 23 13:18:12.351196 master-0 kubenswrapper[26474]: I0223 13:18:12.351151 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7"} err="failed to get container 
status \"9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7\": rpc error: code = NotFound desc = could not find container \"9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7\": container with ID starting with 9a5a044963cd5261e2ea99f996c84e7a89ac86ae665bde1d93eb2c3926b298c7 not found: ID does not exist" Feb 23 13:18:12.396073 master-0 kubenswrapper[26474]: I0223 13:18:12.395944 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 23 13:18:12.396273 master-0 kubenswrapper[26474]: I0223 13:18:12.396102 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 23 13:18:12.396273 master-0 kubenswrapper[26474]: I0223 13:18:12.396157 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log" (OuterVolumeSpecName: "var-log") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:18:12.396273 master-0 kubenswrapper[26474]: I0223 13:18:12.396200 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 23 13:18:12.396399 master-0 kubenswrapper[26474]: I0223 13:18:12.396274 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests" (OuterVolumeSpecName: "manifests") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:18:12.396399 master-0 kubenswrapper[26474]: I0223 13:18:12.396301 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 23 13:18:12.396399 master-0 kubenswrapper[26474]: I0223 13:18:12.396333 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock" (OuterVolumeSpecName: "var-lock") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:18:12.396524 master-0 kubenswrapper[26474]: I0223 13:18:12.396495 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 23 13:18:12.396661 master-0 kubenswrapper[26474]: I0223 13:18:12.396632 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:18:12.397064 master-0 kubenswrapper[26474]: I0223 13:18:12.397037 26474 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:18:12.397064 master-0 kubenswrapper[26474]: I0223 13:18:12.397055 26474 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") on node \"master-0\" DevicePath \"\"" Feb 23 13:18:12.397064 master-0 kubenswrapper[26474]: I0223 13:18:12.397065 26474 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") on node \"master-0\" DevicePath \"\"" Feb 23 13:18:12.397161 master-0 kubenswrapper[26474]: I0223 13:18:12.397074 26474 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:18:12.401676 master-0 kubenswrapper[26474]: I0223 
13:18:12.401630 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 23 13:18:12.405191 master-0 kubenswrapper[26474]: I0223 13:18:12.405088 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:18:12.417925 master-0 kubenswrapper[26474]: I0223 13:18:12.417879 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:18:12.418041 master-0 kubenswrapper[26474]: I0223 13:18:12.417923 26474 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="13d195db-e742-4757-990e-69a3f610e368" Feb 23 13:18:12.424043 master-0 kubenswrapper[26474]: I0223 13:18:12.423919 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:18:12.424043 master-0 kubenswrapper[26474]: I0223 13:18:12.423974 26474 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="13d195db-e742-4757-990e-69a3f610e368" Feb 23 13:18:12.499056 master-0 kubenswrapper[26474]: I0223 13:18:12.498918 26474 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:18:14.402952 master-0 kubenswrapper[26474]: I0223 13:18:14.402803 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2146f0e3671998cad8bbc2464b009ab7" path="/var/lib/kubelet/pods/2146f0e3671998cad8bbc2464b009ab7/volumes" Feb 23 13:18:18.949386 master-0 kubenswrapper[26474]: I0223 13:18:18.946274 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7878b5757c-w9bdq" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console" containerID="cri-o://2bc4645d5f921657472c9470fe31be381cbfbd65864b0e00a74d81c3cd016156" gracePeriod=15 Feb 23 13:18:19.392814 master-0 kubenswrapper[26474]: I0223 13:18:19.392766 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7878b5757c-w9bdq_64fd43ba-9c74-4497-aeab-d2c107eca1b1/console/0.log" Feb 23 13:18:19.393089 master-0 kubenswrapper[26474]: I0223 13:18:19.393061 26474 generic.go:334] "Generic (PLEG): container finished" podID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerID="2bc4645d5f921657472c9470fe31be381cbfbd65864b0e00a74d81c3cd016156" exitCode=2 Feb 23 13:18:19.393189 master-0 kubenswrapper[26474]: I0223 13:18:19.393125 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7878b5757c-w9bdq" event={"ID":"64fd43ba-9c74-4497-aeab-d2c107eca1b1","Type":"ContainerDied","Data":"2bc4645d5f921657472c9470fe31be381cbfbd65864b0e00a74d81c3cd016156"} Feb 23 13:18:19.443865 master-0 kubenswrapper[26474]: I0223 13:18:19.443798 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7878b5757c-w9bdq_64fd43ba-9c74-4497-aeab-d2c107eca1b1/console/0.log" Feb 23 13:18:19.444202 master-0 kubenswrapper[26474]: I0223 13:18:19.443893 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:18:19.585468 master-0 kubenswrapper[26474]: I0223 13:18:19.585299 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-oauth-config\") pod \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") "
Feb 23 13:18:19.585773 master-0 kubenswrapper[26474]: I0223 13:18:19.585522 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-config\") pod \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") "
Feb 23 13:18:19.585773 master-0 kubenswrapper[26474]: I0223 13:18:19.585574 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-serving-cert\") pod \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") "
Feb 23 13:18:19.585773 master-0 kubenswrapper[26474]: I0223 13:18:19.585697 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-oauth-serving-cert\") pod \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") "
Feb 23 13:18:19.586212 master-0 kubenswrapper[26474]: I0223 13:18:19.585780 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-trusted-ca-bundle\") pod \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") "
Feb 23 13:18:19.586212 master-0 kubenswrapper[26474]: I0223 13:18:19.585851 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-service-ca\") pod \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") "
Feb 23 13:18:19.586212 master-0 kubenswrapper[26474]: I0223 13:18:19.585931 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9qsf\" (UniqueName: \"kubernetes.io/projected/64fd43ba-9c74-4497-aeab-d2c107eca1b1-kube-api-access-t9qsf\") pod \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\" (UID: \"64fd43ba-9c74-4497-aeab-d2c107eca1b1\") "
Feb 23 13:18:19.586789 master-0 kubenswrapper[26474]: I0223 13:18:19.586260 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-config" (OuterVolumeSpecName: "console-config") pod "64fd43ba-9c74-4497-aeab-d2c107eca1b1" (UID: "64fd43ba-9c74-4497-aeab-d2c107eca1b1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:18:19.586789 master-0 kubenswrapper[26474]: I0223 13:18:19.586566 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "64fd43ba-9c74-4497-aeab-d2c107eca1b1" (UID: "64fd43ba-9c74-4497-aeab-d2c107eca1b1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:18:19.586789 master-0 kubenswrapper[26474]: I0223 13:18:19.586624 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "64fd43ba-9c74-4497-aeab-d2c107eca1b1" (UID: "64fd43ba-9c74-4497-aeab-d2c107eca1b1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:18:19.587082 master-0 kubenswrapper[26474]: I0223 13:18:19.586828 26474 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:19.587082 master-0 kubenswrapper[26474]: I0223 13:18:19.586851 26474 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:19.587082 master-0 kubenswrapper[26474]: I0223 13:18:19.586863 26474 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:19.587398 master-0 kubenswrapper[26474]: I0223 13:18:19.587053 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-service-ca" (OuterVolumeSpecName: "service-ca") pod "64fd43ba-9c74-4497-aeab-d2c107eca1b1" (UID: "64fd43ba-9c74-4497-aeab-d2c107eca1b1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:18:19.588246 master-0 kubenswrapper[26474]: I0223 13:18:19.588158 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "64fd43ba-9c74-4497-aeab-d2c107eca1b1" (UID: "64fd43ba-9c74-4497-aeab-d2c107eca1b1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:18:19.589702 master-0 kubenswrapper[26474]: I0223 13:18:19.589637 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "64fd43ba-9c74-4497-aeab-d2c107eca1b1" (UID: "64fd43ba-9c74-4497-aeab-d2c107eca1b1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:18:19.591549 master-0 kubenswrapper[26474]: I0223 13:18:19.591239 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64fd43ba-9c74-4497-aeab-d2c107eca1b1-kube-api-access-t9qsf" (OuterVolumeSpecName: "kube-api-access-t9qsf") pod "64fd43ba-9c74-4497-aeab-d2c107eca1b1" (UID: "64fd43ba-9c74-4497-aeab-d2c107eca1b1"). InnerVolumeSpecName "kube-api-access-t9qsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:18:19.688729 master-0 kubenswrapper[26474]: I0223 13:18:19.688562 26474 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:19.688729 master-0 kubenswrapper[26474]: I0223 13:18:19.688647 26474 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64fd43ba-9c74-4497-aeab-d2c107eca1b1-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:19.688729 master-0 kubenswrapper[26474]: I0223 13:18:19.688674 26474 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64fd43ba-9c74-4497-aeab-d2c107eca1b1-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:19.688729 master-0 kubenswrapper[26474]: I0223 13:18:19.688701 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9qsf\" (UniqueName: \"kubernetes.io/projected/64fd43ba-9c74-4497-aeab-d2c107eca1b1-kube-api-access-t9qsf\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:20.443403 master-0 kubenswrapper[26474]: I0223 13:18:20.443189 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7878b5757c-w9bdq_64fd43ba-9c74-4497-aeab-d2c107eca1b1/console/0.log"
Feb 23 13:18:20.443403 master-0 kubenswrapper[26474]: I0223 13:18:20.443282 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7878b5757c-w9bdq" event={"ID":"64fd43ba-9c74-4497-aeab-d2c107eca1b1","Type":"ContainerDied","Data":"341d2300b8becdbe65ae7366eedbb67af7576bc00267c98d3e3971d4084686d1"}
Feb 23 13:18:20.443990 master-0 kubenswrapper[26474]: I0223 13:18:20.443430 26474 scope.go:117] "RemoveContainer" containerID="2bc4645d5f921657472c9470fe31be381cbfbd65864b0e00a74d81c3cd016156"
Feb 23 13:18:20.443990 master-0 kubenswrapper[26474]: I0223 13:18:20.443587 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7878b5757c-w9bdq"
Feb 23 13:18:20.490947 master-0 kubenswrapper[26474]: I0223 13:18:20.490802 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7878b5757c-w9bdq"]
Feb 23 13:18:20.495507 master-0 kubenswrapper[26474]: I0223 13:18:20.495446 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7878b5757c-w9bdq"]
Feb 23 13:18:20.563697 master-0 kubenswrapper[26474]: E0223 13:18:20.563585 26474 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64fd43ba_9c74_4497_aeab_d2c107eca1b1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64fd43ba_9c74_4497_aeab_d2c107eca1b1.slice/crio-341d2300b8becdbe65ae7366eedbb67af7576bc00267c98d3e3971d4084686d1\": RecentStats: unable to find data in memory cache]"
Feb 23 13:18:22.421476 master-0 kubenswrapper[26474]: I0223 13:18:22.421415 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" path="/var/lib/kubelet/pods/64fd43ba-9c74-4497-aeab-d2c107eca1b1/volumes"
Feb 23 13:18:24.545088 master-0 kubenswrapper[26474]: I0223 13:18:24.544990 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 23 13:18:32.422741 master-0 kubenswrapper[26474]: I0223 13:18:32.422562 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-945f67446-9jmmn" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console" containerID="cri-o://08f4a2679b65ec141096dd259c3329584732b214055d07a7216d70d823114f91" gracePeriod=15
Feb 23 13:18:32.466071 master-0 kubenswrapper[26474]: E0223 13:18:32.466005 26474 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff605101_24a4_4034_b2c8_f8ca959464d5.slice/crio-08f4a2679b65ec141096dd259c3329584732b214055d07a7216d70d823114f91.scope\": RecentStats: unable to find data in memory cache]"
Feb 23 13:18:32.557641 master-0 kubenswrapper[26474]: I0223 13:18:32.557017 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-945f67446-9jmmn_ff605101-24a4-4034-b2c8-f8ca959464d5/console/0.log"
Feb 23 13:18:32.557641 master-0 kubenswrapper[26474]: I0223 13:18:32.557086 26474 generic.go:334] "Generic (PLEG): container finished" podID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerID="08f4a2679b65ec141096dd259c3329584732b214055d07a7216d70d823114f91" exitCode=2
Feb 23 13:18:32.557641 master-0 kubenswrapper[26474]: I0223 13:18:32.557129 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-945f67446-9jmmn" event={"ID":"ff605101-24a4-4034-b2c8-f8ca959464d5","Type":"ContainerDied","Data":"08f4a2679b65ec141096dd259c3329584732b214055d07a7216d70d823114f91"}
Feb 23 13:18:32.885024 master-0 kubenswrapper[26474]: I0223 13:18:32.884975 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-945f67446-9jmmn_ff605101-24a4-4034-b2c8-f8ca959464d5/console/0.log"
Feb 23 13:18:32.885202 master-0 kubenswrapper[26474]: I0223 13:18:32.885048 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:18:33.036376 master-0 kubenswrapper[26474]: I0223 13:18:33.036272 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-trusted-ca-bundle\") pod \"ff605101-24a4-4034-b2c8-f8ca959464d5\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") "
Feb 23 13:18:33.036376 master-0 kubenswrapper[26474]: I0223 13:18:33.036379 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-serving-cert\") pod \"ff605101-24a4-4034-b2c8-f8ca959464d5\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") "
Feb 23 13:18:33.036611 master-0 kubenswrapper[26474]: I0223 13:18:33.036416 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-oauth-serving-cert\") pod \"ff605101-24a4-4034-b2c8-f8ca959464d5\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") "
Feb 23 13:18:33.036611 master-0 kubenswrapper[26474]: I0223 13:18:33.036489 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-oauth-config\") pod \"ff605101-24a4-4034-b2c8-f8ca959464d5\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") "
Feb 23 13:18:33.036611 master-0 kubenswrapper[26474]: I0223 13:18:33.036533 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-console-config\") pod \"ff605101-24a4-4034-b2c8-f8ca959464d5\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") "
Feb 23 13:18:33.036611 master-0 kubenswrapper[26474]: I0223 13:18:33.036579 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-service-ca\") pod \"ff605101-24a4-4034-b2c8-f8ca959464d5\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") "
Feb 23 13:18:33.036741 master-0 kubenswrapper[26474]: I0223 13:18:33.036663 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfqn7\" (UniqueName: \"kubernetes.io/projected/ff605101-24a4-4034-b2c8-f8ca959464d5-kube-api-access-jfqn7\") pod \"ff605101-24a4-4034-b2c8-f8ca959464d5\" (UID: \"ff605101-24a4-4034-b2c8-f8ca959464d5\") "
Feb 23 13:18:33.037455 master-0 kubenswrapper[26474]: I0223 13:18:33.037354 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-service-ca" (OuterVolumeSpecName: "service-ca") pod "ff605101-24a4-4034-b2c8-f8ca959464d5" (UID: "ff605101-24a4-4034-b2c8-f8ca959464d5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:18:33.037455 master-0 kubenswrapper[26474]: I0223 13:18:33.037367 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ff605101-24a4-4034-b2c8-f8ca959464d5" (UID: "ff605101-24a4-4034-b2c8-f8ca959464d5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:18:33.037596 master-0 kubenswrapper[26474]: I0223 13:18:33.037466 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ff605101-24a4-4034-b2c8-f8ca959464d5" (UID: "ff605101-24a4-4034-b2c8-f8ca959464d5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:18:33.037878 master-0 kubenswrapper[26474]: I0223 13:18:33.037801 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-console-config" (OuterVolumeSpecName: "console-config") pod "ff605101-24a4-4034-b2c8-f8ca959464d5" (UID: "ff605101-24a4-4034-b2c8-f8ca959464d5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:18:33.039055 master-0 kubenswrapper[26474]: I0223 13:18:33.039015 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ff605101-24a4-4034-b2c8-f8ca959464d5" (UID: "ff605101-24a4-4034-b2c8-f8ca959464d5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:18:33.039594 master-0 kubenswrapper[26474]: I0223 13:18:33.039553 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff605101-24a4-4034-b2c8-f8ca959464d5-kube-api-access-jfqn7" (OuterVolumeSpecName: "kube-api-access-jfqn7") pod "ff605101-24a4-4034-b2c8-f8ca959464d5" (UID: "ff605101-24a4-4034-b2c8-f8ca959464d5"). InnerVolumeSpecName "kube-api-access-jfqn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:18:33.039946 master-0 kubenswrapper[26474]: I0223 13:18:33.039908 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ff605101-24a4-4034-b2c8-f8ca959464d5" (UID: "ff605101-24a4-4034-b2c8-f8ca959464d5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:18:33.138233 master-0 kubenswrapper[26474]: I0223 13:18:33.138121 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfqn7\" (UniqueName: \"kubernetes.io/projected/ff605101-24a4-4034-b2c8-f8ca959464d5-kube-api-access-jfqn7\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:33.138233 master-0 kubenswrapper[26474]: I0223 13:18:33.138162 26474 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:33.138233 master-0 kubenswrapper[26474]: I0223 13:18:33.138172 26474 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:33.138233 master-0 kubenswrapper[26474]: I0223 13:18:33.138181 26474 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:33.138233 master-0 kubenswrapper[26474]: I0223 13:18:33.138189 26474 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff605101-24a4-4034-b2c8-f8ca959464d5-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:33.138233 master-0 kubenswrapper[26474]: I0223 13:18:33.138197 26474 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-console-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:33.138233 master-0 kubenswrapper[26474]: I0223 13:18:33.138207 26474 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff605101-24a4-4034-b2c8-f8ca959464d5-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:18:33.565612 master-0 kubenswrapper[26474]: I0223 13:18:33.565564 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-945f67446-9jmmn_ff605101-24a4-4034-b2c8-f8ca959464d5/console/0.log"
Feb 23 13:18:33.566086 master-0 kubenswrapper[26474]: I0223 13:18:33.565638 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-945f67446-9jmmn" event={"ID":"ff605101-24a4-4034-b2c8-f8ca959464d5","Type":"ContainerDied","Data":"69e275c52b2c887290d26a4fcbcd25a868dbe2b6e43e4f542aede766f7316fbf"}
Feb 23 13:18:33.566086 master-0 kubenswrapper[26474]: I0223 13:18:33.565683 26474 scope.go:117] "RemoveContainer" containerID="08f4a2679b65ec141096dd259c3329584732b214055d07a7216d70d823114f91"
Feb 23 13:18:33.566086 master-0 kubenswrapper[26474]: I0223 13:18:33.565815 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-945f67446-9jmmn"
Feb 23 13:18:33.616072 master-0 kubenswrapper[26474]: I0223 13:18:33.615973 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-945f67446-9jmmn"]
Feb 23 13:18:33.621750 master-0 kubenswrapper[26474]: I0223 13:18:33.621694 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-945f67446-9jmmn"]
Feb 23 13:18:34.404686 master-0 kubenswrapper[26474]: I0223 13:18:34.404604 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" path="/var/lib/kubelet/pods/ff605101-24a4-4034-b2c8-f8ca959464d5/volumes"
Feb 23 13:19:07.057502 master-0 kubenswrapper[26474]: I0223 13:19:07.057405 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bd776f658-lwrp8"]
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: E0223 13:19:07.057739 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: I0223 13:19:07.057756 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: E0223 13:19:07.057778 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: I0223 13:19:07.057787 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: E0223 13:19:07.057802 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: I0223 13:19:07.057809 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: I0223 13:19:07.057943 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: I0223 13:19:07.057975 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="64fd43ba-9c74-4497-aeab-d2c107eca1b1" containerName="console"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: I0223 13:19:07.057985 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff605101-24a4-4034-b2c8-f8ca959464d5" containerName="console"
Feb 23 13:19:07.058715 master-0 kubenswrapper[26474]: I0223 13:19:07.058438 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.083265 master-0 kubenswrapper[26474]: I0223 13:19:07.083195 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bd776f658-lwrp8"]
Feb 23 13:19:07.204041 master-0 kubenswrapper[26474]: I0223 13:19:07.203872 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-config\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.204331 master-0 kubenswrapper[26474]: I0223 13:19:07.204114 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-oauth-serving-cert\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.204331 master-0 kubenswrapper[26474]: I0223 13:19:07.204193 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-serving-cert\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.204331 master-0 kubenswrapper[26474]: I0223 13:19:07.204289 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-trusted-ca-bundle\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.204684 master-0 kubenswrapper[26474]: I0223 13:19:07.204378 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-service-ca\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.204684 master-0 kubenswrapper[26474]: I0223 13:19:07.204439 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4jk9\" (UniqueName: \"kubernetes.io/projected/9e1d93bf-9366-4a73-90e2-8fc9acec810b-kube-api-access-v4jk9\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.204684 master-0 kubenswrapper[26474]: I0223 13:19:07.204605 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-oauth-config\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.306244 master-0 kubenswrapper[26474]: I0223 13:19:07.306160 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-trusted-ca-bundle\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.306599 master-0 kubenswrapper[26474]: I0223 13:19:07.306486 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-service-ca\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.307296 master-0 kubenswrapper[26474]: I0223 13:19:07.307251 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-trusted-ca-bundle\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.307376 master-0 kubenswrapper[26474]: I0223 13:19:07.307253 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4jk9\" (UniqueName: \"kubernetes.io/projected/9e1d93bf-9366-4a73-90e2-8fc9acec810b-kube-api-access-v4jk9\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.307472 master-0 kubenswrapper[26474]: I0223 13:19:07.307443 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-oauth-config\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.307615 master-0 kubenswrapper[26474]: I0223 13:19:07.307552 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-config\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.307615 master-0 kubenswrapper[26474]: I0223 13:19:07.307585 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-oauth-serving-cert\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.307711 master-0 kubenswrapper[26474]: I0223 13:19:07.307622 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-serving-cert\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.307711 master-0 kubenswrapper[26474]: I0223 13:19:07.307688 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-service-ca\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.308967 master-0 kubenswrapper[26474]: I0223 13:19:07.308834 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-oauth-serving-cert\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.308967 master-0 kubenswrapper[26474]: I0223 13:19:07.308843 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-config\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.313391 master-0 kubenswrapper[26474]: I0223 13:19:07.312326 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-oauth-config\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.313391 master-0 kubenswrapper[26474]: I0223 13:19:07.312659 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-serving-cert\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.325633 master-0 kubenswrapper[26474]: I0223 13:19:07.325586 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4jk9\" (UniqueName: \"kubernetes.io/projected/9e1d93bf-9366-4a73-90e2-8fc9acec810b-kube-api-access-v4jk9\") pod \"console-bd776f658-lwrp8\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") " pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.385287 master-0 kubenswrapper[26474]: I0223 13:19:07.385202 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:07.834607 master-0 kubenswrapper[26474]: I0223 13:19:07.834522 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bd776f658-lwrp8"]
Feb 23 13:19:08.839672 master-0 kubenswrapper[26474]: I0223 13:19:08.839603 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd776f658-lwrp8" event={"ID":"9e1d93bf-9366-4a73-90e2-8fc9acec810b","Type":"ContainerStarted","Data":"f36fcec11f3d8bb1c7f4b1af48da0a6a1d17052a118731f1488660c217cf7447"}
Feb 23 13:19:08.839672 master-0 kubenswrapper[26474]: I0223 13:19:08.839656 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd776f658-lwrp8" event={"ID":"9e1d93bf-9366-4a73-90e2-8fc9acec810b","Type":"ContainerStarted","Data":"78b3b187413aecf7aadd3e6bb8f8a0f50e61056bc8ac371436ec5d2b4f3e0d08"}
Feb 23 13:19:08.861654 master-0 kubenswrapper[26474]: I0223 13:19:08.861549 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bd776f658-lwrp8" podStartSLOduration=1.861526287 podStartE2EDuration="1.861526287s" podCreationTimestamp="2026-02-23 13:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:19:08.856408093 +0000 UTC m=+270.702915770" watchObservedRunningTime="2026-02-23 13:19:08.861526287 +0000 UTC m=+270.708033974"
Feb 23 13:19:17.386026 master-0 kubenswrapper[26474]: I0223 13:19:17.385980 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:17.386506 master-0 kubenswrapper[26474]: I0223 13:19:17.386406 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:17.390461 master-0 kubenswrapper[26474]: I0223 13:19:17.390426 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:19:17.657487 master-0 kubenswrapper[26474]: I0223 13:19:17.657426 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx"
Feb 23 13:19:17.674751 master-0 kubenswrapper[26474]: I0223 13:19:17.674651 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles\") pod \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") "
Feb 23 13:19:17.675093 master-0 kubenswrapper[26474]: I0223 13:19:17.674752 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log\") pod \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") "
Feb 23 13:19:17.675093 master-0 kubenswrapper[26474]: I0223 13:19:17.674808 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle\") pod \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") "
Feb 23 13:19:17.675492 master-0 kubenswrapper[26474]: I0223 13:19:17.675449 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log" (OuterVolumeSpecName: "audit-log") pod "65ecd69f-3f1b-41d7-ba1f-225acaa735d7" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:19:17.675592 master-0 kubenswrapper[26474]: I0223 13:19:17.675453 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "65ecd69f-3f1b-41d7-ba1f-225acaa735d7" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:19:17.677357 master-0 kubenswrapper[26474]: I0223 13:19:17.676444 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "65ecd69f-3f1b-41d7-ba1f-225acaa735d7" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:19:17.776867 master-0 kubenswrapper[26474]: I0223 13:19:17.776211 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls\") pod \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") "
Feb 23 13:19:17.777097 master-0 kubenswrapper[26474]: I0223 13:19:17.776907 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle\") pod \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") "
Feb 23 13:19:17.777166 master-0 kubenswrapper[26474]: I0223 13:19:17.777139 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs\") pod \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") "
Feb 23 13:19:17.777652 master-0 kubenswrapper[26474]: I0223 13:19:17.777594 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8wvx\" (UniqueName: \"kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx\") pod \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\" (UID: \"65ecd69f-3f1b-41d7-ba1f-225acaa735d7\") "
Feb 23 13:19:17.778221 master-0 kubenswrapper[26474]: I0223 13:19:17.778190 26474 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:17.778221 master-0 kubenswrapper[26474]: I0223 13:19:17.778216 26474
reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-audit-log\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:17.778333 master-0 kubenswrapper[26474]: I0223 13:19:17.778230 26474 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:17.779991 master-0 kubenswrapper[26474]: I0223 13:19:17.779934 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "65ecd69f-3f1b-41d7-ba1f-225acaa735d7" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:19:17.780279 master-0 kubenswrapper[26474]: I0223 13:19:17.780200 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx" (OuterVolumeSpecName: "kube-api-access-l8wvx") pod "65ecd69f-3f1b-41d7-ba1f-225acaa735d7" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7"). InnerVolumeSpecName "kube-api-access-l8wvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:19:17.781531 master-0 kubenswrapper[26474]: I0223 13:19:17.781479 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "65ecd69f-3f1b-41d7-ba1f-225acaa735d7" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:19:17.781788 master-0 kubenswrapper[26474]: I0223 13:19:17.781749 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "65ecd69f-3f1b-41d7-ba1f-225acaa735d7" (UID: "65ecd69f-3f1b-41d7-ba1f-225acaa735d7"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:19:17.878966 master-0 kubenswrapper[26474]: I0223 13:19:17.878766 26474 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:17.878966 master-0 kubenswrapper[26474]: I0223 13:19:17.878823 26474 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:17.878966 master-0 kubenswrapper[26474]: I0223 13:19:17.878840 26474 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:17.878966 master-0 kubenswrapper[26474]: I0223 13:19:17.878856 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8wvx\" (UniqueName: \"kubernetes.io/projected/65ecd69f-3f1b-41d7-ba1f-225acaa735d7-kube-api-access-l8wvx\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:17.904236 master-0 kubenswrapper[26474]: I0223 13:19:17.904141 26474 generic.go:334] "Generic (PLEG): container finished" podID="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" 
containerID="3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438" exitCode=0 Feb 23 13:19:17.904236 master-0 kubenswrapper[26474]: I0223 13:19:17.904252 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" Feb 23 13:19:17.904236 master-0 kubenswrapper[26474]: I0223 13:19:17.904235 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" event={"ID":"65ecd69f-3f1b-41d7-ba1f-225acaa735d7","Type":"ContainerDied","Data":"3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438"} Feb 23 13:19:17.904614 master-0 kubenswrapper[26474]: I0223 13:19:17.904458 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-69f7f878d4-746vx" event={"ID":"65ecd69f-3f1b-41d7-ba1f-225acaa735d7","Type":"ContainerDied","Data":"aacef958695b1652452330209b40a5322d7de81c0ce86e84b51d42da90b8a1df"} Feb 23 13:19:17.904614 master-0 kubenswrapper[26474]: I0223 13:19:17.904524 26474 scope.go:117] "RemoveContainer" containerID="3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438" Feb 23 13:19:17.907896 master-0 kubenswrapper[26474]: I0223 13:19:17.907859 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bd776f658-lwrp8" Feb 23 13:19:17.925296 master-0 kubenswrapper[26474]: I0223 13:19:17.925252 26474 scope.go:117] "RemoveContainer" containerID="3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438" Feb 23 13:19:17.929690 master-0 kubenswrapper[26474]: E0223 13:19:17.928296 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438\": container with ID starting with 3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438 not found: ID does not exist" 
containerID="3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438" Feb 23 13:19:17.929690 master-0 kubenswrapper[26474]: I0223 13:19:17.928379 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438"} err="failed to get container status \"3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438\": rpc error: code = NotFound desc = could not find container \"3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438\": container with ID starting with 3be7a1d9c39c7d5204b43012f5d093fa4cd1ab8a34b61382972dca1baaa49438 not found: ID does not exist" Feb 23 13:19:17.952485 master-0 kubenswrapper[26474]: I0223 13:19:17.952418 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-69f7f878d4-746vx"] Feb 23 13:19:17.973382 master-0 kubenswrapper[26474]: I0223 13:19:17.973301 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-69f7f878d4-746vx"] Feb 23 13:19:17.987074 master-0 kubenswrapper[26474]: I0223 13:19:17.986999 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7788ddcf8d-rzkgn"] Feb 23 13:19:18.401006 master-0 kubenswrapper[26474]: I0223 13:19:18.400954 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" path="/var/lib/kubelet/pods/65ecd69f-3f1b-41d7-ba1f-225acaa735d7/volumes" Feb 23 13:19:38.416358 master-0 kubenswrapper[26474]: I0223 13:19:38.416237 26474 kubelet.go:1505] "Image garbage collection succeeded" Feb 23 13:19:43.028376 master-0 kubenswrapper[26474]: I0223 13:19:43.028242 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7788ddcf8d-rzkgn" podUID="19fc605b-8e19-4779-9b58-afea88250452" containerName="console" 
containerID="cri-o://750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2" gracePeriod=15
Feb 23 13:19:43.557266 master-0 kubenswrapper[26474]: I0223 13:19:43.557113 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7788ddcf8d-rzkgn_19fc605b-8e19-4779-9b58-afea88250452/console/0.log"
Feb 23 13:19:43.557266 master-0 kubenswrapper[26474]: I0223 13:19:43.557206 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:19:43.718806 master-0 kubenswrapper[26474]: I0223 13:19:43.718698 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpxss\" (UniqueName: \"kubernetes.io/projected/19fc605b-8e19-4779-9b58-afea88250452-kube-api-access-rpxss\") pod \"19fc605b-8e19-4779-9b58-afea88250452\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") "
Feb 23 13:19:43.718806 master-0 kubenswrapper[26474]: I0223 13:19:43.718783 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-service-ca\") pod \"19fc605b-8e19-4779-9b58-afea88250452\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") "
Feb 23 13:19:43.719266 master-0 kubenswrapper[26474]: I0223 13:19:43.718833 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-console-config\") pod \"19fc605b-8e19-4779-9b58-afea88250452\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") "
Feb 23 13:19:43.719266 master-0 kubenswrapper[26474]: I0223 13:19:43.718915 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-oauth-config\") pod \"19fc605b-8e19-4779-9b58-afea88250452\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") "
Feb 23 13:19:43.719266 master-0 kubenswrapper[26474]: I0223 13:19:43.719016 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-serving-cert\") pod \"19fc605b-8e19-4779-9b58-afea88250452\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") "
Feb 23 13:19:43.719266 master-0 kubenswrapper[26474]: I0223 13:19:43.719146 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-oauth-serving-cert\") pod \"19fc605b-8e19-4779-9b58-afea88250452\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") "
Feb 23 13:19:43.719266 master-0 kubenswrapper[26474]: I0223 13:19:43.719185 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-trusted-ca-bundle\") pod \"19fc605b-8e19-4779-9b58-afea88250452\" (UID: \"19fc605b-8e19-4779-9b58-afea88250452\") "
Feb 23 13:19:43.720037 master-0 kubenswrapper[26474]: I0223 13:19:43.719843 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-console-config" (OuterVolumeSpecName: "console-config") pod "19fc605b-8e19-4779-9b58-afea88250452" (UID: "19fc605b-8e19-4779-9b58-afea88250452"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:19:43.720037 master-0 kubenswrapper[26474]: I0223 13:19:43.719909 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "19fc605b-8e19-4779-9b58-afea88250452" (UID: "19fc605b-8e19-4779-9b58-afea88250452"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:19:43.720037 master-0 kubenswrapper[26474]: I0223 13:19:43.719971 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "19fc605b-8e19-4779-9b58-afea88250452" (UID: "19fc605b-8e19-4779-9b58-afea88250452"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:19:43.720318 master-0 kubenswrapper[26474]: I0223 13:19:43.720137 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-service-ca" (OuterVolumeSpecName: "service-ca") pod "19fc605b-8e19-4779-9b58-afea88250452" (UID: "19fc605b-8e19-4779-9b58-afea88250452"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:19:43.722738 master-0 kubenswrapper[26474]: I0223 13:19:43.722684 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fc605b-8e19-4779-9b58-afea88250452-kube-api-access-rpxss" (OuterVolumeSpecName: "kube-api-access-rpxss") pod "19fc605b-8e19-4779-9b58-afea88250452" (UID: "19fc605b-8e19-4779-9b58-afea88250452"). InnerVolumeSpecName "kube-api-access-rpxss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:19:43.723219 master-0 kubenswrapper[26474]: I0223 13:19:43.723134 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "19fc605b-8e19-4779-9b58-afea88250452" (UID: "19fc605b-8e19-4779-9b58-afea88250452"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:19:43.724870 master-0 kubenswrapper[26474]: I0223 13:19:43.724795 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "19fc605b-8e19-4779-9b58-afea88250452" (UID: "19fc605b-8e19-4779-9b58-afea88250452"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:19:43.822395 master-0 kubenswrapper[26474]: I0223 13:19:43.822229 26474 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:43.822395 master-0 kubenswrapper[26474]: I0223 13:19:43.822297 26474 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc605b-8e19-4779-9b58-afea88250452-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:43.822395 master-0 kubenswrapper[26474]: I0223 13:19:43.822320 26474 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:43.822395 master-0 kubenswrapper[26474]: I0223 13:19:43.822357 26474 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:43.822395 master-0 kubenswrapper[26474]: I0223 13:19:43.822379 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpxss\" (UniqueName: \"kubernetes.io/projected/19fc605b-8e19-4779-9b58-afea88250452-kube-api-access-rpxss\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:43.822395 master-0 kubenswrapper[26474]: I0223 13:19:43.822402 26474 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:43.822872 master-0 kubenswrapper[26474]: I0223 13:19:43.822424 26474 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19fc605b-8e19-4779-9b58-afea88250452-console-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:44.125648 master-0 kubenswrapper[26474]: I0223 13:19:44.125525 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7788ddcf8d-rzkgn_19fc605b-8e19-4779-9b58-afea88250452/console/0.log"
Feb 23 13:19:44.125648 master-0 kubenswrapper[26474]: I0223 13:19:44.125591 26474 generic.go:334] "Generic (PLEG): container finished" podID="19fc605b-8e19-4779-9b58-afea88250452" containerID="750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2" exitCode=2
Feb 23 13:19:44.125648 master-0 kubenswrapper[26474]: I0223 13:19:44.125627 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7788ddcf8d-rzkgn" event={"ID":"19fc605b-8e19-4779-9b58-afea88250452","Type":"ContainerDied","Data":"750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2"}
Feb 23 13:19:44.125648 master-0 kubenswrapper[26474]: I0223 13:19:44.125664 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7788ddcf8d-rzkgn" event={"ID":"19fc605b-8e19-4779-9b58-afea88250452","Type":"ContainerDied","Data":"0244c49f37b4f8b752ca6e5ef3babe2861bc8d3ac86402816bc837898a7283ec"}
Feb 23 13:19:44.126458 master-0 kubenswrapper[26474]: I0223 13:19:44.125669 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7788ddcf8d-rzkgn"
Feb 23 13:19:44.126458 master-0 kubenswrapper[26474]: I0223 13:19:44.125689 26474 scope.go:117] "RemoveContainer" containerID="750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2"
Feb 23 13:19:44.150978 master-0 kubenswrapper[26474]: I0223 13:19:44.150924 26474 scope.go:117] "RemoveContainer" containerID="750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2"
Feb 23 13:19:44.156907 master-0 kubenswrapper[26474]: E0223 13:19:44.152155 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2\": container with ID starting with 750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2 not found: ID does not exist" containerID="750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2"
Feb 23 13:19:44.156907 master-0 kubenswrapper[26474]: I0223 13:19:44.152208 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2"} err="failed to get container status \"750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2\": rpc error: code = NotFound desc = could not find container \"750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2\": container with ID starting with 750db1cd85fed825442129e5557e59a20c8f7f9fdd658ca3aead18d9d9d300d2 not found: ID does not exist"
Feb 23 13:19:44.168495 master-0 kubenswrapper[26474]: I0223 13:19:44.167834 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7788ddcf8d-rzkgn"]
Feb 23 13:19:44.176318 master-0 kubenswrapper[26474]: I0223 13:19:44.176250 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7788ddcf8d-rzkgn"]
Feb 23 13:19:44.403734 master-0 kubenswrapper[26474]: I0223 13:19:44.402545 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fc605b-8e19-4779-9b58-afea88250452" path="/var/lib/kubelet/pods/19fc605b-8e19-4779-9b58-afea88250452/volumes"
Feb 23 13:20:04.279051 master-0 kubenswrapper[26474]: I0223 13:20:04.278905 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"]
Feb 23 13:20:04.280167 master-0 kubenswrapper[26474]: E0223 13:20:04.279569 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" containerName="metrics-server"
Feb 23 13:20:04.280167 master-0 kubenswrapper[26474]: I0223 13:20:04.279600 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" containerName="metrics-server"
Feb 23 13:20:04.280167 master-0 kubenswrapper[26474]: E0223 13:20:04.279638 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc605b-8e19-4779-9b58-afea88250452" containerName="console"
Feb 23 13:20:04.280167 master-0 kubenswrapper[26474]: I0223 13:20:04.279652 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc605b-8e19-4779-9b58-afea88250452" containerName="console"
Feb 23 13:20:04.280167 master-0 kubenswrapper[26474]: I0223 13:20:04.279919 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fc605b-8e19-4779-9b58-afea88250452" containerName="console"
Feb 23 13:20:04.280167 master-0 kubenswrapper[26474]: I0223 13:20:04.280017 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ecd69f-3f1b-41d7-ba1f-225acaa735d7" containerName="metrics-server"
Feb 23 13:20:04.281042 master-0 kubenswrapper[26474]: I0223 13:20:04.280977 26474 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.286310 master-0 kubenswrapper[26474]: I0223 13:20:04.286198 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt"
Feb 23 13:20:04.286597 master-0 kubenswrapper[26474]: I0223 13:20:04.286213 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt"
Feb 23 13:20:04.286597 master-0 kubenswrapper[26474]: I0223 13:20:04.286505 26474 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config"
Feb 23 13:20:04.286804 master-0 kubenswrapper[26474]: I0223 13:20:04.286638 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config"
Feb 23 13:20:04.318435 master-0 kubenswrapper[26474]: I0223 13:20:04.301483 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"]
Feb 23 13:20:04.402068 master-0 kubenswrapper[26474]: I0223 13:20:04.401911 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/16b626ff-7dab-4ad6-9ad8-d639af21bedc-os-client-config\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.402384 master-0 kubenswrapper[26474]: I0223 13:20:04.402115 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/16b626ff-7dab-4ad6-9ad8-d639af21bedc-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.402384 master-0 kubenswrapper[26474]: I0223 13:20:04.402194 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsrbf\" (UniqueName: \"kubernetes.io/projected/16b626ff-7dab-4ad6-9ad8-d639af21bedc-kube-api-access-jsrbf\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.506545 master-0 kubenswrapper[26474]: I0223 13:20:04.506407 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsrbf\" (UniqueName: \"kubernetes.io/projected/16b626ff-7dab-4ad6-9ad8-d639af21bedc-kube-api-access-jsrbf\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.508810 master-0 kubenswrapper[26474]: I0223 13:20:04.508717 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/16b626ff-7dab-4ad6-9ad8-d639af21bedc-os-client-config\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.509148 master-0 kubenswrapper[26474]: I0223 13:20:04.509096 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/16b626ff-7dab-4ad6-9ad8-d639af21bedc-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.511574 master-0 kubenswrapper[26474]: I0223 13:20:04.511462 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/16b626ff-7dab-4ad6-9ad8-d639af21bedc-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.516138 master-0 kubenswrapper[26474]: I0223 13:20:04.516070 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/16b626ff-7dab-4ad6-9ad8-d639af21bedc-os-client-config\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.543991 master-0 kubenswrapper[26474]: I0223 13:20:04.543805 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsrbf\" (UniqueName: \"kubernetes.io/projected/16b626ff-7dab-4ad6-9ad8-d639af21bedc-kube-api-access-jsrbf\") pod \"sushy-emulator-78f6d7d749-wtnqn\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:04.630477 master-0 kubenswrapper[26474]: I0223 13:20:04.630332 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:05.130069 master-0 kubenswrapper[26474]: I0223 13:20:05.129993 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"]
Feb 23 13:20:05.342584 master-0 kubenswrapper[26474]: I0223 13:20:05.342501 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn" event={"ID":"16b626ff-7dab-4ad6-9ad8-d639af21bedc","Type":"ContainerStarted","Data":"cc54ed9d1faac2d7fcd22c7c5b6df33729422c37f5b5268f14d156941cce777d"}
Feb 23 13:20:12.411772 master-0 kubenswrapper[26474]: I0223 13:20:12.411238 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn" event={"ID":"16b626ff-7dab-4ad6-9ad8-d639af21bedc","Type":"ContainerStarted","Data":"1199bc619f5c722ce342004dcb78f2414d734b39dce3ca2ec709699b56716ff2"}
Feb 23 13:20:14.631378 master-0 kubenswrapper[26474]: I0223 13:20:14.631234 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:14.631378 master-0 kubenswrapper[26474]: I0223 13:20:14.631374 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:14.645817 master-0 kubenswrapper[26474]: I0223 13:20:14.645763 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:14.667980 master-0 kubenswrapper[26474]: I0223 13:20:14.667826 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn" podStartSLOduration=3.695363017 podStartE2EDuration="10.667801552s" podCreationTimestamp="2026-02-23 13:20:04 +0000 UTC" firstStartedPulling="2026-02-23 13:20:05.137725027 +0000 UTC m=+326.984232704" lastFinishedPulling="2026-02-23 13:20:12.110163562 +0000 UTC m=+333.956671239" observedRunningTime="2026-02-23 13:20:12.433652566 +0000 UTC m=+334.280160273" watchObservedRunningTime="2026-02-23 13:20:14.667801552 +0000 UTC m=+336.514309229"
Feb 23 13:20:15.440700 master-0 kubenswrapper[26474]: I0223 13:20:15.440610 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"
Feb 23 13:20:23.939857 master-0 kubenswrapper[26474]: I0223 13:20:23.939702 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"]
Feb 23 13:20:23.941530 master-0 kubenswrapper[26474]: I0223 13:20:23.941484 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:23.944379 master-0 kubenswrapper[26474]: I0223 13:20:23.944279 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-lz65v"
Feb 23 13:20:23.944855 master-0 kubenswrapper[26474]: I0223 13:20:23.944807 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 23 13:20:23.958244 master-0 kubenswrapper[26474]: I0223 13:20:23.958142 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"]
Feb 23 13:20:24.003644 master-0 kubenswrapper[26474]: I0223 13:20:24.003539 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.004060 master-0 kubenswrapper[26474]: I0223 13:20:24.003985 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d48679-b77a-44e4-96d0-1006527f35f9-kube-api-access\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.004587 master-0 kubenswrapper[26474]: I0223 13:20:24.004490 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-var-lock\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.108389 master-0 kubenswrapper[26474]: I0223 13:20:24.107160 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.108389 master-0 kubenswrapper[26474]: I0223 13:20:24.107396 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d48679-b77a-44e4-96d0-1006527f35f9-kube-api-access\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.108389 master-0 kubenswrapper[26474]: I0223 13:20:24.108129 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-var-lock\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.108389 master-0 kubenswrapper[26474]: I0223 13:20:24.108271 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-var-lock\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.108389 master-0 kubenswrapper[26474]: I0223 13:20:24.108330 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.125890 master-0 kubenswrapper[26474]: I0223 13:20:24.125798 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d48679-b77a-44e4-96d0-1006527f35f9-kube-api-access\") pod \"installer-5-master-0\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Feb 23 13:20:24.291303 master-0 kubenswrapper[26474]: I0223 13:20:24.291160 26474 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:20:24.829449 master-0 kubenswrapper[26474]: I0223 13:20:24.829372 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 23 13:20:25.546770 master-0 kubenswrapper[26474]: I0223 13:20:25.546692 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"21d48679-b77a-44e4-96d0-1006527f35f9","Type":"ContainerStarted","Data":"ca54f73c123d6c3585991435ded2144b1ff312717f2b8c3c61dc8f6699e41929"} Feb 23 13:20:25.546770 master-0 kubenswrapper[26474]: I0223 13:20:25.546773 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"21d48679-b77a-44e4-96d0-1006527f35f9","Type":"ContainerStarted","Data":"963df20434925d58deb014f74e88566beca5320d3f7d5c726fddbc1eb1b70039"} Feb 23 13:20:25.593728 master-0 kubenswrapper[26474]: I0223 13:20:25.593618 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=2.593596276 podStartE2EDuration="2.593596276s" podCreationTimestamp="2026-02-23 13:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:20:25.576169831 +0000 UTC m=+347.422677588" watchObservedRunningTime="2026-02-23 13:20:25.593596276 +0000 UTC m=+347.440103963" Feb 23 13:20:34.445658 master-0 kubenswrapper[26474]: I0223 13:20:34.445563 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-799fc8779-hsmgl"] Feb 23 13:20:34.447076 master-0 kubenswrapper[26474]: I0223 13:20:34.447023 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" Feb 23 13:20:34.448088 master-0 kubenswrapper[26474]: I0223 13:20:34.447968 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-799fc8779-hsmgl"] Feb 23 13:20:34.514481 master-0 kubenswrapper[26474]: I0223 13:20:34.513857 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlfb\" (UniqueName: \"kubernetes.io/projected/c3c6d397-5002-4fa8-9630-fcae89627ebe-kube-api-access-wqlfb\") pod \"nova-console-poller-799fc8779-hsmgl\" (UID: \"c3c6d397-5002-4fa8-9630-fcae89627ebe\") " pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" Feb 23 13:20:34.514481 master-0 kubenswrapper[26474]: I0223 13:20:34.513934 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c3c6d397-5002-4fa8-9630-fcae89627ebe-os-client-config\") pod \"nova-console-poller-799fc8779-hsmgl\" (UID: \"c3c6d397-5002-4fa8-9630-fcae89627ebe\") " pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" Feb 23 13:20:34.616018 master-0 kubenswrapper[26474]: I0223 13:20:34.615922 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlfb\" (UniqueName: \"kubernetes.io/projected/c3c6d397-5002-4fa8-9630-fcae89627ebe-kube-api-access-wqlfb\") pod \"nova-console-poller-799fc8779-hsmgl\" (UID: \"c3c6d397-5002-4fa8-9630-fcae89627ebe\") " pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" Feb 23 13:20:34.616282 master-0 kubenswrapper[26474]: I0223 13:20:34.616084 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c3c6d397-5002-4fa8-9630-fcae89627ebe-os-client-config\") pod \"nova-console-poller-799fc8779-hsmgl\" (UID: \"c3c6d397-5002-4fa8-9630-fcae89627ebe\") " 
pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" Feb 23 13:20:34.622061 master-0 kubenswrapper[26474]: I0223 13:20:34.621994 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c3c6d397-5002-4fa8-9630-fcae89627ebe-os-client-config\") pod \"nova-console-poller-799fc8779-hsmgl\" (UID: \"c3c6d397-5002-4fa8-9630-fcae89627ebe\") " pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" Feb 23 13:20:34.643373 master-0 kubenswrapper[26474]: I0223 13:20:34.643293 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlfb\" (UniqueName: \"kubernetes.io/projected/c3c6d397-5002-4fa8-9630-fcae89627ebe-kube-api-access-wqlfb\") pod \"nova-console-poller-799fc8779-hsmgl\" (UID: \"c3c6d397-5002-4fa8-9630-fcae89627ebe\") " pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" Feb 23 13:20:34.775039 master-0 kubenswrapper[26474]: I0223 13:20:34.774960 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" Feb 23 13:20:35.182986 master-0 kubenswrapper[26474]: I0223 13:20:35.182885 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-799fc8779-hsmgl"] Feb 23 13:20:35.187907 master-0 kubenswrapper[26474]: W0223 13:20:35.187840 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3c6d397_5002_4fa8_9630_fcae89627ebe.slice/crio-29d9e89ca53aa1ae536cbfb9419f14ac4d0be9723bcdc4c2c9543d95ccb409c0 WatchSource:0}: Error finding container 29d9e89ca53aa1ae536cbfb9419f14ac4d0be9723bcdc4c2c9543d95ccb409c0: Status 404 returned error can't find the container with id 29d9e89ca53aa1ae536cbfb9419f14ac4d0be9723bcdc4c2c9543d95ccb409c0 Feb 23 13:20:35.192181 master-0 kubenswrapper[26474]: I0223 13:20:35.192114 26474 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:20:35.639176 master-0 kubenswrapper[26474]: I0223 13:20:35.639088 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" event={"ID":"c3c6d397-5002-4fa8-9630-fcae89627ebe","Type":"ContainerStarted","Data":"29d9e89ca53aa1ae536cbfb9419f14ac4d0be9723bcdc4c2c9543d95ccb409c0"} Feb 23 13:20:41.690288 master-0 kubenswrapper[26474]: I0223 13:20:41.690219 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" event={"ID":"c3c6d397-5002-4fa8-9630-fcae89627ebe","Type":"ContainerStarted","Data":"107b1cf9c5f78006eaaa4c27f383a2f4fd5ccae4c8a15566946af6d38c418de8"} Feb 23 13:20:42.700732 master-0 kubenswrapper[26474]: I0223 13:20:42.700635 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" 
event={"ID":"c3c6d397-5002-4fa8-9630-fcae89627ebe","Type":"ContainerStarted","Data":"106cdf0753a551bde786e1981fbf205e60e22dacfbe35bb0f3dba2839c968dd0"} Feb 23 13:20:42.729244 master-0 kubenswrapper[26474]: I0223 13:20:42.729127 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-799fc8779-hsmgl" podStartSLOduration=2.187345529 podStartE2EDuration="8.729100112s" podCreationTimestamp="2026-02-23 13:20:34 +0000 UTC" firstStartedPulling="2026-02-23 13:20:35.192040248 +0000 UTC m=+357.038548015" lastFinishedPulling="2026-02-23 13:20:41.733794911 +0000 UTC m=+363.580302598" observedRunningTime="2026-02-23 13:20:42.728073487 +0000 UTC m=+364.574581204" watchObservedRunningTime="2026-02-23 13:20:42.729100112 +0000 UTC m=+364.575607799" Feb 23 13:20:58.483085 master-0 kubenswrapper[26474]: I0223 13:20:58.482983 26474 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:20:58.484300 master-0 kubenswrapper[26474]: I0223 13:20:58.483391 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="cluster-policy-controller" containerID="cri-o://6d1d2a690e1d1c47fa4cec1c840fe9083bc8bf1097a1a9a0b84ede40886e22da" gracePeriod=30 Feb 23 13:20:58.484300 master-0 kubenswrapper[26474]: I0223 13:20:58.483506 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://225b72ffe810de606c91db96bda704162eba140695b0d114f42ad9b5f7338027" gracePeriod=30 Feb 23 13:20:58.484300 master-0 kubenswrapper[26474]: I0223 13:20:58.483564 26474 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://318802d3ffb8951642b7de6e2fcdce57f2f19df5bc9dbb49de74dac1fb692661" gracePeriod=30 Feb 23 13:20:58.484300 master-0 kubenswrapper[26474]: I0223 13:20:58.483556 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" containerID="cri-o://2e3d12f7546ed9dc911e6b0badc88fa73138850feb384e2188c5098c9007f1a4" gracePeriod=30 Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: I0223 13:20:58.485261 26474 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: E0223 13:20:58.485734 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="cluster-policy-controller" Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: I0223 13:20:58.485758 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="cluster-policy-controller" Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: E0223 13:20:58.485775 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: I0223 13:20:58.485787 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: E0223 13:20:58.485813 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager-cert-syncer" Feb 23 
13:20:58.485898 master-0 kubenswrapper[26474]: I0223 13:20:58.485825 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager-cert-syncer" Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: E0223 13:20:58.485852 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager-recovery-controller" Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: I0223 13:20:58.485862 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager-recovery-controller" Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: E0223 13:20:58.485893 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" Feb 23 13:20:58.485898 master-0 kubenswrapper[26474]: I0223 13:20:58.485905 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" Feb 23 13:20:58.487472 master-0 kubenswrapper[26474]: I0223 13:20:58.486131 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager-cert-syncer" Feb 23 13:20:58.487472 master-0 kubenswrapper[26474]: I0223 13:20:58.486180 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" Feb 23 13:20:58.487472 master-0 kubenswrapper[26474]: I0223 13:20:58.486202 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="cluster-policy-controller" Feb 23 13:20:58.487472 master-0 kubenswrapper[26474]: I0223 13:20:58.486214 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" 
containerName="kube-controller-manager" Feb 23 13:20:58.487472 master-0 kubenswrapper[26474]: I0223 13:20:58.486238 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager-recovery-controller" Feb 23 13:20:58.487472 master-0 kubenswrapper[26474]: E0223 13:20:58.486465 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" Feb 23 13:20:58.487472 master-0 kubenswrapper[26474]: I0223 13:20:58.486484 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" Feb 23 13:20:58.487472 master-0 kubenswrapper[26474]: I0223 13:20:58.486719 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerName="kube-controller-manager" Feb 23 13:20:58.672727 master-0 kubenswrapper[26474]: I0223 13:20:58.672629 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/58692b75be3a1359e52d345f8172ff0f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"58692b75be3a1359e52d345f8172ff0f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:20:58.672932 master-0 kubenswrapper[26474]: I0223 13:20:58.672812 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/58692b75be3a1359e52d345f8172ff0f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"58692b75be3a1359e52d345f8172ff0f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:20:58.775610 master-0 kubenswrapper[26474]: I0223 13:20:58.775510 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/58692b75be3a1359e52d345f8172ff0f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"58692b75be3a1359e52d345f8172ff0f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:20:58.775834 master-0 kubenswrapper[26474]: I0223 13:20:58.775637 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/58692b75be3a1359e52d345f8172ff0f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"58692b75be3a1359e52d345f8172ff0f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:20:58.775889 master-0 kubenswrapper[26474]: I0223 13:20:58.775815 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/58692b75be3a1359e52d345f8172ff0f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"58692b75be3a1359e52d345f8172ff0f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:20:58.775932 master-0 kubenswrapper[26474]: I0223 13:20:58.775837 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/58692b75be3a1359e52d345f8172ff0f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"58692b75be3a1359e52d345f8172ff0f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:20:58.807887 master-0 kubenswrapper[26474]: I0223 13:20:58.807797 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager/1.log" Feb 23 13:20:58.809650 master-0 kubenswrapper[26474]: I0223 13:20:58.809611 26474 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager-cert-syncer/0.log" Feb 23 13:20:58.810306 master-0 kubenswrapper[26474]: I0223 13:20:58.810252 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:20:58.816256 master-0 kubenswrapper[26474]: I0223 13:20:58.816152 26474 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="5b3e061f9d09dab5dbaef15b3f1e67a0" podUID="58692b75be3a1359e52d345f8172ff0f" Feb 23 13:20:58.850096 master-0 kubenswrapper[26474]: I0223 13:20:58.850019 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager/1.log" Feb 23 13:20:58.851032 master-0 kubenswrapper[26474]: I0223 13:20:58.850998 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager-cert-syncer/0.log" Feb 23 13:20:58.851445 master-0 kubenswrapper[26474]: I0223 13:20:58.851401 26474 generic.go:334] "Generic (PLEG): container finished" podID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerID="2e3d12f7546ed9dc911e6b0badc88fa73138850feb384e2188c5098c9007f1a4" exitCode=0 Feb 23 13:20:58.851445 master-0 kubenswrapper[26474]: I0223 13:20:58.851435 26474 generic.go:334] "Generic (PLEG): container finished" podID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerID="225b72ffe810de606c91db96bda704162eba140695b0d114f42ad9b5f7338027" exitCode=0 Feb 23 13:20:58.851445 master-0 kubenswrapper[26474]: I0223 13:20:58.851446 26474 generic.go:334] "Generic (PLEG): container finished" podID="5b3e061f9d09dab5dbaef15b3f1e67a0" 
containerID="318802d3ffb8951642b7de6e2fcdce57f2f19df5bc9dbb49de74dac1fb692661" exitCode=2 Feb 23 13:20:58.851546 master-0 kubenswrapper[26474]: I0223 13:20:58.851456 26474 generic.go:334] "Generic (PLEG): container finished" podID="5b3e061f9d09dab5dbaef15b3f1e67a0" containerID="6d1d2a690e1d1c47fa4cec1c840fe9083bc8bf1097a1a9a0b84ede40886e22da" exitCode=0 Feb 23 13:20:58.851546 master-0 kubenswrapper[26474]: I0223 13:20:58.851526 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc4d37a7c196893b681f7266fada584fdde2bd3754f734040ae9f8026f3c584" Feb 23 13:20:58.851546 master-0 kubenswrapper[26474]: I0223 13:20:58.851547 26474 scope.go:117] "RemoveContainer" containerID="e375fe5c02f0608ef4aac501c8122f7edac3d21f041acfb53911dc7efc555b71" Feb 23 13:20:58.851725 master-0 kubenswrapper[26474]: I0223 13:20:58.851692 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:20:58.856901 master-0 kubenswrapper[26474]: I0223 13:20:58.856839 26474 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="5b3e061f9d09dab5dbaef15b3f1e67a0" podUID="58692b75be3a1359e52d345f8172ff0f" Feb 23 13:20:58.857028 master-0 kubenswrapper[26474]: I0223 13:20:58.856999 26474 generic.go:334] "Generic (PLEG): container finished" podID="21d48679-b77a-44e4-96d0-1006527f35f9" containerID="ca54f73c123d6c3585991435ded2144b1ff312717f2b8c3c61dc8f6699e41929" exitCode=0 Feb 23 13:20:58.857090 master-0 kubenswrapper[26474]: I0223 13:20:58.857042 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"21d48679-b77a-44e4-96d0-1006527f35f9","Type":"ContainerDied","Data":"ca54f73c123d6c3585991435ded2144b1ff312717f2b8c3c61dc8f6699e41929"} Feb 23 13:20:58.981120 master-0 
kubenswrapper[26474]: I0223 13:20:58.981047 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir\") pod \"5b3e061f9d09dab5dbaef15b3f1e67a0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " Feb 23 13:20:58.981321 master-0 kubenswrapper[26474]: I0223 13:20:58.981192 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5b3e061f9d09dab5dbaef15b3f1e67a0" (UID: "5b3e061f9d09dab5dbaef15b3f1e67a0"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:58.981321 master-0 kubenswrapper[26474]: I0223 13:20:58.981211 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir\") pod \"5b3e061f9d09dab5dbaef15b3f1e67a0\" (UID: \"5b3e061f9d09dab5dbaef15b3f1e67a0\") " Feb 23 13:20:58.981321 master-0 kubenswrapper[26474]: I0223 13:20:58.981263 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "5b3e061f9d09dab5dbaef15b3f1e67a0" (UID: "5b3e061f9d09dab5dbaef15b3f1e67a0"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:58.981970 master-0 kubenswrapper[26474]: I0223 13:20:58.981894 26474 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:20:58.982026 master-0 kubenswrapper[26474]: I0223 13:20:58.981993 26474 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b3e061f9d09dab5dbaef15b3f1e67a0-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:20:59.168726 master-0 kubenswrapper[26474]: I0223 13:20:59.168656 26474 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="5b3e061f9d09dab5dbaef15b3f1e67a0" podUID="58692b75be3a1359e52d345f8172ff0f" Feb 23 13:20:59.878367 master-0 kubenswrapper[26474]: I0223 13:20:59.878281 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5b3e061f9d09dab5dbaef15b3f1e67a0/kube-controller-manager-cert-syncer/0.log" Feb 23 13:21:00.284030 master-0 kubenswrapper[26474]: I0223 13:21:00.283957 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:21:00.304897 master-0 kubenswrapper[26474]: I0223 13:21:00.304824 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-kubelet-dir\") pod \"21d48679-b77a-44e4-96d0-1006527f35f9\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " Feb 23 13:21:00.305115 master-0 kubenswrapper[26474]: I0223 13:21:00.304964 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-var-lock\") pod \"21d48679-b77a-44e4-96d0-1006527f35f9\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " Feb 23 13:21:00.305115 master-0 kubenswrapper[26474]: I0223 13:21:00.305032 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d48679-b77a-44e4-96d0-1006527f35f9-kube-api-access\") pod \"21d48679-b77a-44e4-96d0-1006527f35f9\" (UID: \"21d48679-b77a-44e4-96d0-1006527f35f9\") " Feb 23 13:21:00.306332 master-0 kubenswrapper[26474]: I0223 13:21:00.305845 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "21d48679-b77a-44e4-96d0-1006527f35f9" (UID: "21d48679-b77a-44e4-96d0-1006527f35f9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:21:00.306332 master-0 kubenswrapper[26474]: I0223 13:21:00.305936 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-var-lock" (OuterVolumeSpecName: "var-lock") pod "21d48679-b77a-44e4-96d0-1006527f35f9" (UID: "21d48679-b77a-44e4-96d0-1006527f35f9"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:21:00.308030 master-0 kubenswrapper[26474]: I0223 13:21:00.307983 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d48679-b77a-44e4-96d0-1006527f35f9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "21d48679-b77a-44e4-96d0-1006527f35f9" (UID: "21d48679-b77a-44e4-96d0-1006527f35f9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:21:00.402614 master-0 kubenswrapper[26474]: I0223 13:21:00.402527 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3e061f9d09dab5dbaef15b3f1e67a0" path="/var/lib/kubelet/pods/5b3e061f9d09dab5dbaef15b3f1e67a0/volumes" Feb 23 13:21:00.406426 master-0 kubenswrapper[26474]: I0223 13:21:00.406309 26474 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:00.406426 master-0 kubenswrapper[26474]: I0223 13:21:00.406360 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d48679-b77a-44e4-96d0-1006527f35f9-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:00.406426 master-0 kubenswrapper[26474]: I0223 13:21:00.406370 26474 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21d48679-b77a-44e4-96d0-1006527f35f9-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:00.893425 master-0 kubenswrapper[26474]: I0223 13:21:00.893285 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"21d48679-b77a-44e4-96d0-1006527f35f9","Type":"ContainerDied","Data":"963df20434925d58deb014f74e88566beca5320d3f7d5c726fddbc1eb1b70039"} Feb 23 
13:21:00.894242 master-0 kubenswrapper[26474]: I0223 13:21:00.893451 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:21:00.894242 master-0 kubenswrapper[26474]: I0223 13:21:00.893469 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963df20434925d58deb014f74e88566beca5320d3f7d5c726fddbc1eb1b70039" Feb 23 13:21:13.393116 master-0 kubenswrapper[26474]: I0223 13:21:13.392996 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:13.415941 master-0 kubenswrapper[26474]: I0223 13:21:13.415853 26474 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="aa845721-3403-4203-9372-15a850f697cf" Feb 23 13:21:13.415941 master-0 kubenswrapper[26474]: I0223 13:21:13.415897 26474 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="aa845721-3403-4203-9372-15a850f697cf" Feb 23 13:21:13.434163 master-0 kubenswrapper[26474]: I0223 13:21:13.434079 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:21:13.437795 master-0 kubenswrapper[26474]: I0223 13:21:13.437719 26474 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:13.446245 master-0 kubenswrapper[26474]: I0223 13:21:13.446168 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:21:13.451104 master-0 kubenswrapper[26474]: I0223 13:21:13.451051 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:13.457458 master-0 kubenswrapper[26474]: I0223 13:21:13.457397 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:21:13.487123 master-0 kubenswrapper[26474]: W0223 13:21:13.487027 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58692b75be3a1359e52d345f8172ff0f.slice/crio-6f9c3b756d74c8ad3e3fc9835e023bd1f2d06854f996da1a307672e9fbddf486 WatchSource:0}: Error finding container 6f9c3b756d74c8ad3e3fc9835e023bd1f2d06854f996da1a307672e9fbddf486: Status 404 returned error can't find the container with id 6f9c3b756d74c8ad3e3fc9835e023bd1f2d06854f996da1a307672e9fbddf486 Feb 23 13:21:14.007502 master-0 kubenswrapper[26474]: I0223 13:21:14.007452 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"58692b75be3a1359e52d345f8172ff0f","Type":"ContainerStarted","Data":"bcca444cf9fefc5a69aa2ebae2ca929caf18ba2e7d11f280157e2203fd9d17d4"} Feb 23 13:21:14.007502 master-0 kubenswrapper[26474]: I0223 13:21:14.007501 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"58692b75be3a1359e52d345f8172ff0f","Type":"ContainerStarted","Data":"e6f88273646df609787ad3e429f24d7bbd63a73b5990f5e3dae9c2b57be0ac7c"} Feb 23 13:21:14.007502 master-0 kubenswrapper[26474]: I0223 13:21:14.007511 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"58692b75be3a1359e52d345f8172ff0f","Type":"ContainerStarted","Data":"6f9c3b756d74c8ad3e3fc9835e023bd1f2d06854f996da1a307672e9fbddf486"} Feb 23 13:21:15.021391 master-0 kubenswrapper[26474]: I0223 13:21:15.020912 26474 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"58692b75be3a1359e52d345f8172ff0f","Type":"ContainerStarted","Data":"cf77f1c50d1ac4df27056dba4fc2f58e36fe062d9a32255cbcbd991e2c9a048a"} Feb 23 13:21:15.021391 master-0 kubenswrapper[26474]: I0223 13:21:15.020993 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"58692b75be3a1359e52d345f8172ff0f","Type":"ContainerStarted","Data":"7de70d75480af17bd2b3189234871163be23e2426037ee187bed3ca93ce60e98"} Feb 23 13:21:15.060456 master-0 kubenswrapper[26474]: I0223 13:21:15.060271 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.060238155 podStartE2EDuration="2.060238155s" podCreationTimestamp="2026-02-23 13:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:15.058621896 +0000 UTC m=+396.905129593" watchObservedRunningTime="2026-02-23 13:21:15.060238155 +0000 UTC m=+396.906745872" Feb 23 13:21:23.456017 master-0 kubenswrapper[26474]: I0223 13:21:23.455917 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:23.456017 master-0 kubenswrapper[26474]: I0223 13:21:23.456010 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:23.456017 master-0 kubenswrapper[26474]: I0223 13:21:23.456036 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:23.457329 master-0 kubenswrapper[26474]: I0223 13:21:23.456063 26474 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:23.462072 master-0 kubenswrapper[26474]: I0223 13:21:23.461991 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:23.464158 master-0 kubenswrapper[26474]: I0223 13:21:23.463836 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:24.150720 master-0 kubenswrapper[26474]: I0223 13:21:24.150149 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:24.152964 master-0 kubenswrapper[26474]: I0223 13:21:24.152819 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:21:36.045548 master-0 kubenswrapper[26474]: I0223 13:21:36.045476 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-ddd484895-mpws6"] Feb 23 13:21:36.046482 master-0 kubenswrapper[26474]: E0223 13:21:36.045849 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d48679-b77a-44e4-96d0-1006527f35f9" containerName="installer" Feb 23 13:21:36.046482 master-0 kubenswrapper[26474]: I0223 13:21:36.045869 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d48679-b77a-44e4-96d0-1006527f35f9" containerName="installer" Feb 23 13:21:36.046482 master-0 kubenswrapper[26474]: I0223 13:21:36.046121 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d48679-b77a-44e4-96d0-1006527f35f9" containerName="installer" Feb 23 13:21:36.047098 master-0 kubenswrapper[26474]: I0223 13:21:36.047071 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:36.062386 master-0 kubenswrapper[26474]: I0223 13:21:36.062303 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-ddd484895-mpws6"] Feb 23 13:21:36.248998 master-0 kubenswrapper[26474]: I0223 13:21:36.248923 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-os-client-config\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: \"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:36.249202 master-0 kubenswrapper[26474]: I0223 13:21:36.249034 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-nova-console-recordings-pv\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: \"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:36.249364 master-0 kubenswrapper[26474]: I0223 13:21:36.249271 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhztk\" (UniqueName: \"kubernetes.io/projected/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-kube-api-access-bhztk\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: \"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:36.350828 master-0 kubenswrapper[26474]: I0223 13:21:36.350678 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhztk\" (UniqueName: \"kubernetes.io/projected/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-kube-api-access-bhztk\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: 
\"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:36.350828 master-0 kubenswrapper[26474]: I0223 13:21:36.350767 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-os-client-config\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: \"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:36.350828 master-0 kubenswrapper[26474]: I0223 13:21:36.350813 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-nova-console-recordings-pv\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: \"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:36.355376 master-0 kubenswrapper[26474]: I0223 13:21:36.355324 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-os-client-config\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: \"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:36.377038 master-0 kubenswrapper[26474]: I0223 13:21:36.376950 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhztk\" (UniqueName: \"kubernetes.io/projected/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-kube-api-access-bhztk\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: \"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:37.090261 master-0 kubenswrapper[26474]: I0223 13:21:37.090133 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/c4ff2ce2-48a0-4b1c-83d7-63b9576116c2-nova-console-recordings-pv\") pod \"nova-console-recorder-ddd484895-mpws6\" (UID: \"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2\") " pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:37.264606 master-0 kubenswrapper[26474]: I0223 13:21:37.264541 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" Feb 23 13:21:37.826252 master-0 kubenswrapper[26474]: I0223 13:21:37.826183 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-ddd484895-mpws6"] Feb 23 13:21:38.262414 master-0 kubenswrapper[26474]: I0223 13:21:38.262297 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" event={"ID":"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2","Type":"ContainerStarted","Data":"f1a0b6a8253c9db0f63bee8b5fb858f50174e55171a0bf3e842f101e57dfe3ee"} Feb 23 13:21:39.725620 master-0 kubenswrapper[26474]: I0223 13:21:39.725526 26474 scope.go:117] "RemoveContainer" containerID="318802d3ffb8951642b7de6e2fcdce57f2f19df5bc9dbb49de74dac1fb692661" Feb 23 13:21:40.920459 master-0 kubenswrapper[26474]: I0223 13:21:40.919983 26474 scope.go:117] "RemoveContainer" containerID="6d1d2a690e1d1c47fa4cec1c840fe9083bc8bf1097a1a9a0b84ede40886e22da" Feb 23 13:21:40.939223 master-0 kubenswrapper[26474]: I0223 13:21:40.939160 26474 scope.go:117] "RemoveContainer" containerID="860ec94f783c3b653a3048dbdbe8687055c34d3047415d2575f5257d4a2f1cc0" Feb 23 13:21:40.959122 master-0 kubenswrapper[26474]: I0223 13:21:40.959080 26474 scope.go:117] "RemoveContainer" containerID="225b72ffe810de606c91db96bda704162eba140695b0d114f42ad9b5f7338027" Feb 23 13:21:46.316404 master-0 kubenswrapper[26474]: I0223 13:21:46.316234 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" 
event={"ID":"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2","Type":"ContainerStarted","Data":"7dc58cabba49fd0d37e807b9eacf0a0d02b4bc7fc2fafc8b9912b81c73e8e905"} Feb 23 13:21:47.328462 master-0 kubenswrapper[26474]: I0223 13:21:47.328322 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" event={"ID":"c4ff2ce2-48a0-4b1c-83d7-63b9576116c2","Type":"ContainerStarted","Data":"a18abeb3466d0656938aefde53bec936141853ab3b6fd5a21aa57ec09190295e"} Feb 23 13:22:12.573126 master-0 kubenswrapper[26474]: I0223 13:22:12.573033 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-ddd484895-mpws6" podStartSLOduration=28.816673623 podStartE2EDuration="37.573009906s" podCreationTimestamp="2026-02-23 13:21:35 +0000 UTC" firstStartedPulling="2026-02-23 13:21:37.839940838 +0000 UTC m=+419.686448555" lastFinishedPulling="2026-02-23 13:21:46.596277161 +0000 UTC m=+428.442784838" observedRunningTime="2026-02-23 13:21:47.350084317 +0000 UTC m=+429.196592104" watchObservedRunningTime="2026-02-23 13:22:12.573009906 +0000 UTC m=+454.419517583" Feb 23 13:22:12.576365 master-0 kubenswrapper[26474]: I0223 13:22:12.576283 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7"] Feb 23 13:22:12.577771 master-0 kubenswrapper[26474]: I0223 13:22:12.577735 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.579911 master-0 kubenswrapper[26474]: I0223 13:22:12.579860 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-zxmx7" Feb 23 13:22:12.597003 master-0 kubenswrapper[26474]: I0223 13:22:12.595139 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7"] Feb 23 13:22:12.610198 master-0 kubenswrapper[26474]: I0223 13:22:12.610130 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.610725 master-0 kubenswrapper[26474]: I0223 13:22:12.610592 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49tj\" (UniqueName: \"kubernetes.io/projected/bcc4760b-b370-477d-83e6-7d3b6fe01627-kube-api-access-h49tj\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.610725 master-0 kubenswrapper[26474]: I0223 13:22:12.610647 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.711757 master-0 kubenswrapper[26474]: I0223 13:22:12.711692 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.711971 master-0 kubenswrapper[26474]: I0223 13:22:12.711814 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h49tj\" (UniqueName: \"kubernetes.io/projected/bcc4760b-b370-477d-83e6-7d3b6fe01627-kube-api-access-h49tj\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.711971 master-0 kubenswrapper[26474]: I0223 13:22:12.711847 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.712677 master-0 kubenswrapper[26474]: I0223 13:22:12.712647 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 
13:22:12.712791 master-0 kubenswrapper[26474]: I0223 13:22:12.712740 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.731817 master-0 kubenswrapper[26474]: I0223 13:22:12.731758 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49tj\" (UniqueName: \"kubernetes.io/projected/bcc4760b-b370-477d-83e6-7d3b6fe01627-kube-api-access-h49tj\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:12.893812 master-0 kubenswrapper[26474]: I0223 13:22:12.893626 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:13.342190 master-0 kubenswrapper[26474]: I0223 13:22:13.342111 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7"] Feb 23 13:22:13.555980 master-0 kubenswrapper[26474]: I0223 13:22:13.555903 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" event={"ID":"bcc4760b-b370-477d-83e6-7d3b6fe01627","Type":"ContainerStarted","Data":"eade6617d60c867c52bdd352b18205226baea8ab88eef6185b27d2b9abdb4253"} Feb 23 13:22:13.555980 master-0 kubenswrapper[26474]: I0223 13:22:13.555968 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" event={"ID":"bcc4760b-b370-477d-83e6-7d3b6fe01627","Type":"ContainerStarted","Data":"3e45996a81f1d457f65ca2f7136dd11934fd131e40b1a211e7a1744f6d601f95"} Feb 23 13:22:14.564442 master-0 kubenswrapper[26474]: I0223 13:22:14.564393 26474 generic.go:334] "Generic (PLEG): container finished" podID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerID="eade6617d60c867c52bdd352b18205226baea8ab88eef6185b27d2b9abdb4253" exitCode=0 Feb 23 13:22:14.564442 master-0 kubenswrapper[26474]: I0223 13:22:14.564442 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" event={"ID":"bcc4760b-b370-477d-83e6-7d3b6fe01627","Type":"ContainerDied","Data":"eade6617d60c867c52bdd352b18205226baea8ab88eef6185b27d2b9abdb4253"} Feb 23 13:22:16.583181 master-0 kubenswrapper[26474]: I0223 13:22:16.583070 26474 generic.go:334] "Generic (PLEG): container finished" podID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerID="17ec4927ee2acd6e75df7a35434c0f06cdc7b13cb53ded310179784c4821deff" 
exitCode=0 Feb 23 13:22:16.583181 master-0 kubenswrapper[26474]: I0223 13:22:16.583145 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" event={"ID":"bcc4760b-b370-477d-83e6-7d3b6fe01627","Type":"ContainerDied","Data":"17ec4927ee2acd6e75df7a35434c0f06cdc7b13cb53ded310179784c4821deff"} Feb 23 13:22:17.596772 master-0 kubenswrapper[26474]: I0223 13:22:17.596686 26474 generic.go:334] "Generic (PLEG): container finished" podID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerID="c96cc3b2a4778ae59262440e898e0d9843c0a8e86bee01e1a0ddf334d0d05088" exitCode=0 Feb 23 13:22:17.596772 master-0 kubenswrapper[26474]: I0223 13:22:17.596752 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" event={"ID":"bcc4760b-b370-477d-83e6-7d3b6fe01627","Type":"ContainerDied","Data":"c96cc3b2a4778ae59262440e898e0d9843c0a8e86bee01e1a0ddf334d0d05088"} Feb 23 13:22:18.957269 master-0 kubenswrapper[26474]: I0223 13:22:18.957109 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:19.026469 master-0 kubenswrapper[26474]: I0223 13:22:19.026375 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-util\") pod \"bcc4760b-b370-477d-83e6-7d3b6fe01627\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " Feb 23 13:22:19.026469 master-0 kubenswrapper[26474]: I0223 13:22:19.026464 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h49tj\" (UniqueName: \"kubernetes.io/projected/bcc4760b-b370-477d-83e6-7d3b6fe01627-kube-api-access-h49tj\") pod \"bcc4760b-b370-477d-83e6-7d3b6fe01627\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " Feb 23 13:22:19.026911 master-0 kubenswrapper[26474]: I0223 13:22:19.026553 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-bundle\") pod \"bcc4760b-b370-477d-83e6-7d3b6fe01627\" (UID: \"bcc4760b-b370-477d-83e6-7d3b6fe01627\") " Feb 23 13:22:19.028036 master-0 kubenswrapper[26474]: I0223 13:22:19.027924 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-bundle" (OuterVolumeSpecName: "bundle") pod "bcc4760b-b370-477d-83e6-7d3b6fe01627" (UID: "bcc4760b-b370-477d-83e6-7d3b6fe01627"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:19.030258 master-0 kubenswrapper[26474]: I0223 13:22:19.030191 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc4760b-b370-477d-83e6-7d3b6fe01627-kube-api-access-h49tj" (OuterVolumeSpecName: "kube-api-access-h49tj") pod "bcc4760b-b370-477d-83e6-7d3b6fe01627" (UID: "bcc4760b-b370-477d-83e6-7d3b6fe01627"). InnerVolumeSpecName "kube-api-access-h49tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:19.099692 master-0 kubenswrapper[26474]: I0223 13:22:19.099571 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-util" (OuterVolumeSpecName: "util") pod "bcc4760b-b370-477d-83e6-7d3b6fe01627" (UID: "bcc4760b-b370-477d-83e6-7d3b6fe01627"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:19.127791 master-0 kubenswrapper[26474]: I0223 13:22:19.127663 26474 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:19.127791 master-0 kubenswrapper[26474]: I0223 13:22:19.127702 26474 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcc4760b-b370-477d-83e6-7d3b6fe01627-util\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:19.127791 master-0 kubenswrapper[26474]: I0223 13:22:19.127712 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h49tj\" (UniqueName: \"kubernetes.io/projected/bcc4760b-b370-477d-83e6-7d3b6fe01627-kube-api-access-h49tj\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:19.613363 master-0 kubenswrapper[26474]: I0223 13:22:19.613293 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" event={"ID":"bcc4760b-b370-477d-83e6-7d3b6fe01627","Type":"ContainerDied","Data":"3e45996a81f1d457f65ca2f7136dd11934fd131e40b1a211e7a1744f6d601f95"} Feb 23 13:22:19.613363 master-0 kubenswrapper[26474]: I0223 13:22:19.613361 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e45996a81f1d457f65ca2f7136dd11934fd131e40b1a211e7a1744f6d601f95" Feb 23 13:22:19.613674 master-0 kubenswrapper[26474]: I0223 13:22:19.613389 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4j76f7" Feb 23 13:22:26.013872 master-0 kubenswrapper[26474]: I0223 13:22:26.013786 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-6769464f96-6d5wq"] Feb 23 13:22:26.014843 master-0 kubenswrapper[26474]: E0223 13:22:26.014486 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerName="util" Feb 23 13:22:26.014843 master-0 kubenswrapper[26474]: I0223 13:22:26.014534 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerName="util" Feb 23 13:22:26.014843 master-0 kubenswrapper[26474]: E0223 13:22:26.014575 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerName="extract" Feb 23 13:22:26.014843 master-0 kubenswrapper[26474]: I0223 13:22:26.014609 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerName="extract" Feb 23 13:22:26.014843 master-0 kubenswrapper[26474]: E0223 13:22:26.014619 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerName="pull" Feb 23 13:22:26.014843 master-0 kubenswrapper[26474]: I0223 
13:22:26.014627 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerName="pull" Feb 23 13:22:26.015184 master-0 kubenswrapper[26474]: I0223 13:22:26.015139 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc4760b-b370-477d-83e6-7d3b6fe01627" containerName="extract" Feb 23 13:22:26.015863 master-0 kubenswrapper[26474]: I0223 13:22:26.015830 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.018434 master-0 kubenswrapper[26474]: I0223 13:22:26.018368 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Feb 23 13:22:26.018434 master-0 kubenswrapper[26474]: I0223 13:22:26.018404 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Feb 23 13:22:26.018748 master-0 kubenswrapper[26474]: I0223 13:22:26.018709 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Feb 23 13:22:26.020124 master-0 kubenswrapper[26474]: I0223 13:22:26.020064 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Feb 23 13:22:26.020504 master-0 kubenswrapper[26474]: I0223 13:22:26.020464 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Feb 23 13:22:26.042457 master-0 kubenswrapper[26474]: I0223 13:22:26.042384 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-6769464f96-6d5wq"] Feb 23 13:22:26.112641 master-0 kubenswrapper[26474]: I0223 13:22:26.112563 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-metrics-cert\") pod 
\"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.112885 master-0 kubenswrapper[26474]: I0223 13:22:26.112759 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-apiservice-cert\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.113251 master-0 kubenswrapper[26474]: I0223 13:22:26.113195 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/563bf87d-c6d5-4307-8a7d-69c8b5843aad-socket-dir\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.113524 master-0 kubenswrapper[26474]: I0223 13:22:26.113477 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcbr\" (UniqueName: \"kubernetes.io/projected/563bf87d-c6d5-4307-8a7d-69c8b5843aad-kube-api-access-dgcbr\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.113754 master-0 kubenswrapper[26474]: I0223 13:22:26.113701 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-webhook-cert\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.216012 master-0 kubenswrapper[26474]: I0223 13:22:26.215933 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-webhook-cert\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.216225 master-0 kubenswrapper[26474]: I0223 13:22:26.216186 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-metrics-cert\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.216279 master-0 kubenswrapper[26474]: I0223 13:22:26.216232 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-apiservice-cert\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.216577 master-0 kubenswrapper[26474]: I0223 13:22:26.216536 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/563bf87d-c6d5-4307-8a7d-69c8b5843aad-socket-dir\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.216646 master-0 kubenswrapper[26474]: I0223 13:22:26.216615 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgcbr\" (UniqueName: \"kubernetes.io/projected/563bf87d-c6d5-4307-8a7d-69c8b5843aad-kube-api-access-dgcbr\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.218294 master-0 
kubenswrapper[26474]: I0223 13:22:26.217679 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/563bf87d-c6d5-4307-8a7d-69c8b5843aad-socket-dir\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.220436 master-0 kubenswrapper[26474]: I0223 13:22:26.220365 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-metrics-cert\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.229104 master-0 kubenswrapper[26474]: I0223 13:22:26.229016 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-apiservice-cert\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.229484 master-0 kubenswrapper[26474]: I0223 13:22:26.229431 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/563bf87d-c6d5-4307-8a7d-69c8b5843aad-webhook-cert\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.251020 master-0 kubenswrapper[26474]: I0223 13:22:26.250911 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgcbr\" (UniqueName: \"kubernetes.io/projected/563bf87d-c6d5-4307-8a7d-69c8b5843aad-kube-api-access-dgcbr\") pod \"lvms-operator-6769464f96-6d5wq\" (UID: \"563bf87d-c6d5-4307-8a7d-69c8b5843aad\") " pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 
23 13:22:26.342452 master-0 kubenswrapper[26474]: I0223 13:22:26.342308 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:26.740186 master-0 kubenswrapper[26474]: I0223 13:22:26.740132 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-6769464f96-6d5wq"] Feb 23 13:22:26.744734 master-0 kubenswrapper[26474]: W0223 13:22:26.744550 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod563bf87d_c6d5_4307_8a7d_69c8b5843aad.slice/crio-4085a34b63eeeeff1bf85f3f0c09be8050edab29e44c674366e53d9ad1d9aa9f WatchSource:0}: Error finding container 4085a34b63eeeeff1bf85f3f0c09be8050edab29e44c674366e53d9ad1d9aa9f: Status 404 returned error can't find the container with id 4085a34b63eeeeff1bf85f3f0c09be8050edab29e44c674366e53d9ad1d9aa9f Feb 23 13:22:27.673376 master-0 kubenswrapper[26474]: I0223 13:22:27.673265 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-6769464f96-6d5wq" event={"ID":"563bf87d-c6d5-4307-8a7d-69c8b5843aad","Type":"ContainerStarted","Data":"4085a34b63eeeeff1bf85f3f0c09be8050edab29e44c674366e53d9ad1d9aa9f"} Feb 23 13:22:31.714981 master-0 kubenswrapper[26474]: I0223 13:22:31.714896 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-6769464f96-6d5wq" event={"ID":"563bf87d-c6d5-4307-8a7d-69c8b5843aad","Type":"ContainerStarted","Data":"1a3c0f988559e6c5b5e5463f298e91c87b88d8648d680880ad675e0028baa72d"} Feb 23 13:22:31.715674 master-0 kubenswrapper[26474]: I0223 13:22:31.715198 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:31.753823 master-0 kubenswrapper[26474]: I0223 13:22:31.753704 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-storage/lvms-operator-6769464f96-6d5wq" podStartSLOduration=2.404330666 podStartE2EDuration="6.753685443s" podCreationTimestamp="2026-02-23 13:22:25 +0000 UTC" firstStartedPulling="2026-02-23 13:22:26.746775205 +0000 UTC m=+468.593282882" lastFinishedPulling="2026-02-23 13:22:31.096129942 +0000 UTC m=+472.942637659" observedRunningTime="2026-02-23 13:22:31.740525315 +0000 UTC m=+473.587033042" watchObservedRunningTime="2026-02-23 13:22:31.753685443 +0000 UTC m=+473.600193120" Feb 23 13:22:32.728987 master-0 kubenswrapper[26474]: I0223 13:22:32.728914 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-6769464f96-6d5wq" Feb 23 13:22:36.591677 master-0 kubenswrapper[26474]: I0223 13:22:36.591603 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj"] Feb 23 13:22:36.593052 master-0 kubenswrapper[26474]: I0223 13:22:36.593021 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.596786 master-0 kubenswrapper[26474]: I0223 13:22:36.596665 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-zxmx7" Feb 23 13:22:36.611673 master-0 kubenswrapper[26474]: I0223 13:22:36.611624 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj"] Feb 23 13:22:36.703825 master-0 kubenswrapper[26474]: I0223 13:22:36.703595 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lml4\" (UniqueName: \"kubernetes.io/projected/a5767294-6282-42d2-996a-d9fd02669752-kube-api-access-6lml4\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.703825 master-0 kubenswrapper[26474]: I0223 13:22:36.703690 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.703825 master-0 kubenswrapper[26474]: I0223 13:22:36.703732 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.805170 master-0 kubenswrapper[26474]: I0223 13:22:36.805088 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.805406 master-0 kubenswrapper[26474]: I0223 13:22:36.805281 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lml4\" (UniqueName: \"kubernetes.io/projected/a5767294-6282-42d2-996a-d9fd02669752-kube-api-access-6lml4\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.805406 master-0 kubenswrapper[26474]: I0223 13:22:36.805393 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.805708 master-0 kubenswrapper[26474]: I0223 13:22:36.805662 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 
13:22:36.806092 master-0 kubenswrapper[26474]: I0223 13:22:36.806050 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.822689 master-0 kubenswrapper[26474]: I0223 13:22:36.822623 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lml4\" (UniqueName: \"kubernetes.io/projected/a5767294-6282-42d2-996a-d9fd02669752-kube-api-access-6lml4\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:36.914124 master-0 kubenswrapper[26474]: I0223 13:22:36.913930 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:37.172667 master-0 kubenswrapper[26474]: I0223 13:22:37.172523 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk"] Feb 23 13:22:37.174237 master-0 kubenswrapper[26474]: I0223 13:22:37.174194 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.181022 master-0 kubenswrapper[26474]: I0223 13:22:37.180935 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk"] Feb 23 13:22:37.312404 master-0 kubenswrapper[26474]: I0223 13:22:37.312320 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdv9d\" (UniqueName: \"kubernetes.io/projected/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-kube-api-access-tdv9d\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.312630 master-0 kubenswrapper[26474]: I0223 13:22:37.312417 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.312630 master-0 kubenswrapper[26474]: I0223 13:22:37.312484 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.414787 master-0 kubenswrapper[26474]: I0223 13:22:37.414653 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tdv9d\" (UniqueName: \"kubernetes.io/projected/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-kube-api-access-tdv9d\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.414891 master-0 kubenswrapper[26474]: I0223 13:22:37.414851 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.415147 master-0 kubenswrapper[26474]: I0223 13:22:37.415104 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.415740 master-0 kubenswrapper[26474]: I0223 13:22:37.415684 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.416864 master-0 kubenswrapper[26474]: I0223 13:22:37.416671 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-bundle\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.417569 master-0 kubenswrapper[26474]: W0223 13:22:37.417506 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5767294_6282_42d2_996a_d9fd02669752.slice/crio-e15a5ccf047fc703cc1e6dc8a7a9160b136bb6549d2735531dddf43786a0064e WatchSource:0}: Error finding container e15a5ccf047fc703cc1e6dc8a7a9160b136bb6549d2735531dddf43786a0064e: Status 404 returned error can't find the container with id e15a5ccf047fc703cc1e6dc8a7a9160b136bb6549d2735531dddf43786a0064e Feb 23 13:22:37.418482 master-0 kubenswrapper[26474]: I0223 13:22:37.418446 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj"] Feb 23 13:22:37.432520 master-0 kubenswrapper[26474]: I0223 13:22:37.432405 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdv9d\" (UniqueName: \"kubernetes.io/projected/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-kube-api-access-tdv9d\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.489315 master-0 kubenswrapper[26474]: I0223 13:22:37.489240 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:37.770668 master-0 kubenswrapper[26474]: I0223 13:22:37.770553 26474 generic.go:334] "Generic (PLEG): container finished" podID="a5767294-6282-42d2-996a-d9fd02669752" containerID="cbbdc7549055e80f992d3886b37c81ee32ab5b5479834cb4e580babef94d0841" exitCode=0 Feb 23 13:22:37.771190 master-0 kubenswrapper[26474]: I0223 13:22:37.770666 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" event={"ID":"a5767294-6282-42d2-996a-d9fd02669752","Type":"ContainerDied","Data":"cbbdc7549055e80f992d3886b37c81ee32ab5b5479834cb4e580babef94d0841"} Feb 23 13:22:37.771190 master-0 kubenswrapper[26474]: I0223 13:22:37.770751 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" event={"ID":"a5767294-6282-42d2-996a-d9fd02669752","Type":"ContainerStarted","Data":"e15a5ccf047fc703cc1e6dc8a7a9160b136bb6549d2735531dddf43786a0064e"} Feb 23 13:22:37.926007 master-0 kubenswrapper[26474]: I0223 13:22:37.925964 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk"] Feb 23 13:22:37.926815 master-0 kubenswrapper[26474]: W0223 13:22:37.926770 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod786df1c5_f5f9_4f2d_9cd4_f2c1a7e6fbd7.slice/crio-f3b44e5d010b91d5e4d8fe0c8bb1f59574635e19831cb0ee6709959b18aa1c25 WatchSource:0}: Error finding container f3b44e5d010b91d5e4d8fe0c8bb1f59574635e19831cb0ee6709959b18aa1c25: Status 404 returned error can't find the container with id f3b44e5d010b91d5e4d8fe0c8bb1f59574635e19831cb0ee6709959b18aa1c25 Feb 23 13:22:38.782667 master-0 kubenswrapper[26474]: I0223 13:22:38.782592 26474 
generic.go:334] "Generic (PLEG): container finished" podID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerID="759b6cb9f1ca9073a2243f4a7999555eaab13bc7b09d08bf9194b19301358212" exitCode=0 Feb 23 13:22:38.782667 master-0 kubenswrapper[26474]: I0223 13:22:38.782668 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" event={"ID":"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7","Type":"ContainerDied","Data":"759b6cb9f1ca9073a2243f4a7999555eaab13bc7b09d08bf9194b19301358212"} Feb 23 13:22:38.783186 master-0 kubenswrapper[26474]: I0223 13:22:38.782704 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" event={"ID":"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7","Type":"ContainerStarted","Data":"f3b44e5d010b91d5e4d8fe0c8bb1f59574635e19831cb0ee6709959b18aa1c25"} Feb 23 13:22:38.964938 master-0 kubenswrapper[26474]: I0223 13:22:38.964801 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8"] Feb 23 13:22:38.966470 master-0 kubenswrapper[26474]: I0223 13:22:38.966399 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:38.981121 master-0 kubenswrapper[26474]: I0223 13:22:38.981067 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8"] Feb 23 13:22:39.044058 master-0 kubenswrapper[26474]: I0223 13:22:39.043922 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.044058 master-0 kubenswrapper[26474]: I0223 13:22:39.044019 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58ntj\" (UniqueName: \"kubernetes.io/projected/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-kube-api-access-58ntj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.044272 master-0 kubenswrapper[26474]: I0223 13:22:39.044090 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.145438 master-0 kubenswrapper[26474]: I0223 13:22:39.145380 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-58ntj\" (UniqueName: \"kubernetes.io/projected/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-kube-api-access-58ntj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.145643 master-0 kubenswrapper[26474]: I0223 13:22:39.145477 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.145756 master-0 kubenswrapper[26474]: I0223 13:22:39.145727 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.146908 master-0 kubenswrapper[26474]: I0223 13:22:39.146779 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.146908 master-0 kubenswrapper[26474]: I0223 13:22:39.146836 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-bundle\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.163189 master-0 kubenswrapper[26474]: I0223 13:22:39.163145 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58ntj\" (UniqueName: \"kubernetes.io/projected/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-kube-api-access-58ntj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.298683 master-0 kubenswrapper[26474]: I0223 13:22:39.298543 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:39.697401 master-0 kubenswrapper[26474]: I0223 13:22:39.696730 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8"] Feb 23 13:22:39.701703 master-0 kubenswrapper[26474]: W0223 13:22:39.701644 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69f6a151_9c5e_4456_a8dd_dbaf757a91d0.slice/crio-85db7ef670ceada15bebeb107d00d9d573714a273cec83f8a3ff373295cb1579 WatchSource:0}: Error finding container 85db7ef670ceada15bebeb107d00d9d573714a273cec83f8a3ff373295cb1579: Status 404 returned error can't find the container with id 85db7ef670ceada15bebeb107d00d9d573714a273cec83f8a3ff373295cb1579 Feb 23 13:22:39.792192 master-0 kubenswrapper[26474]: I0223 13:22:39.792130 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" 
event={"ID":"69f6a151-9c5e-4456-a8dd-dbaf757a91d0","Type":"ContainerStarted","Data":"85db7ef670ceada15bebeb107d00d9d573714a273cec83f8a3ff373295cb1579"} Feb 23 13:22:40.801045 master-0 kubenswrapper[26474]: I0223 13:22:40.800990 26474 generic.go:334] "Generic (PLEG): container finished" podID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerID="aab193b31cf8de57f8eabb3410023d97908cea1fb2726e57a97e2aed4123edcd" exitCode=0 Feb 23 13:22:40.801045 master-0 kubenswrapper[26474]: I0223 13:22:40.801066 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" event={"ID":"69f6a151-9c5e-4456-a8dd-dbaf757a91d0","Type":"ContainerDied","Data":"aab193b31cf8de57f8eabb3410023d97908cea1fb2726e57a97e2aed4123edcd"} Feb 23 13:22:40.809721 master-0 kubenswrapper[26474]: I0223 13:22:40.809644 26474 generic.go:334] "Generic (PLEG): container finished" podID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerID="0e5fe78efa714dede68ee570c5563104c85800f1a130d19034646181f4312ec6" exitCode=0 Feb 23 13:22:40.809852 master-0 kubenswrapper[26474]: I0223 13:22:40.809733 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" event={"ID":"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7","Type":"ContainerDied","Data":"0e5fe78efa714dede68ee570c5563104c85800f1a130d19034646181f4312ec6"} Feb 23 13:22:41.819069 master-0 kubenswrapper[26474]: I0223 13:22:41.819015 26474 generic.go:334] "Generic (PLEG): container finished" podID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerID="fcaeae78ae75a47254ac251eeaf9de2d20a98c79acea389471e99a184be207f6" exitCode=0 Feb 23 13:22:41.819632 master-0 kubenswrapper[26474]: I0223 13:22:41.819087 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" 
event={"ID":"69f6a151-9c5e-4456-a8dd-dbaf757a91d0","Type":"ContainerDied","Data":"fcaeae78ae75a47254ac251eeaf9de2d20a98c79acea389471e99a184be207f6"} Feb 23 13:22:41.823018 master-0 kubenswrapper[26474]: I0223 13:22:41.822973 26474 generic.go:334] "Generic (PLEG): container finished" podID="a5767294-6282-42d2-996a-d9fd02669752" containerID="58d43fc2288519260f6cd470489ad97f5bd7b80760d80c97d20492eb018ef753" exitCode=0 Feb 23 13:22:41.823114 master-0 kubenswrapper[26474]: I0223 13:22:41.823061 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" event={"ID":"a5767294-6282-42d2-996a-d9fd02669752","Type":"ContainerDied","Data":"58d43fc2288519260f6cd470489ad97f5bd7b80760d80c97d20492eb018ef753"} Feb 23 13:22:41.826487 master-0 kubenswrapper[26474]: I0223 13:22:41.826437 26474 generic.go:334] "Generic (PLEG): container finished" podID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerID="9d3be2698aad0619474cb763c1dbce6442389a990e47e965b8fe370317c1b70b" exitCode=0 Feb 23 13:22:41.826652 master-0 kubenswrapper[26474]: I0223 13:22:41.826502 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" event={"ID":"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7","Type":"ContainerDied","Data":"9d3be2698aad0619474cb763c1dbce6442389a990e47e965b8fe370317c1b70b"} Feb 23 13:22:42.839402 master-0 kubenswrapper[26474]: I0223 13:22:42.839275 26474 generic.go:334] "Generic (PLEG): container finished" podID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerID="d38cc832c775848d32338c8732dd88c1c9e46a77516443078343c1cfb125abe8" exitCode=0 Feb 23 13:22:42.839402 master-0 kubenswrapper[26474]: I0223 13:22:42.839387 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" 
event={"ID":"69f6a151-9c5e-4456-a8dd-dbaf757a91d0","Type":"ContainerDied","Data":"d38cc832c775848d32338c8732dd88c1c9e46a77516443078343c1cfb125abe8"} Feb 23 13:22:42.845094 master-0 kubenswrapper[26474]: I0223 13:22:42.845045 26474 generic.go:334] "Generic (PLEG): container finished" podID="a5767294-6282-42d2-996a-d9fd02669752" containerID="417ccedc2f2a7fa9eb0d66007f8089f89a70f880438b3d2d7ebb7e61121d44ad" exitCode=0 Feb 23 13:22:42.845365 master-0 kubenswrapper[26474]: I0223 13:22:42.845097 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" event={"ID":"a5767294-6282-42d2-996a-d9fd02669752","Type":"ContainerDied","Data":"417ccedc2f2a7fa9eb0d66007f8089f89a70f880438b3d2d7ebb7e61121d44ad"} Feb 23 13:22:43.301531 master-0 kubenswrapper[26474]: I0223 13:22:43.301465 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:43.431432 master-0 kubenswrapper[26474]: I0223 13:22:43.431377 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdv9d\" (UniqueName: \"kubernetes.io/projected/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-kube-api-access-tdv9d\") pod \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " Feb 23 13:22:43.431764 master-0 kubenswrapper[26474]: I0223 13:22:43.431455 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-bundle\") pod \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " Feb 23 13:22:43.431764 master-0 kubenswrapper[26474]: I0223 13:22:43.431594 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-util\") pod \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\" (UID: \"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7\") " Feb 23 13:22:43.432870 master-0 kubenswrapper[26474]: I0223 13:22:43.432793 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-bundle" (OuterVolumeSpecName: "bundle") pod "786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" (UID: "786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:43.438556 master-0 kubenswrapper[26474]: I0223 13:22:43.438507 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-kube-api-access-tdv9d" (OuterVolumeSpecName: "kube-api-access-tdv9d") pod "786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" (UID: "786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7"). InnerVolumeSpecName "kube-api-access-tdv9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:43.447705 master-0 kubenswrapper[26474]: I0223 13:22:43.447637 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-util" (OuterVolumeSpecName: "util") pod "786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" (UID: "786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:43.535215 master-0 kubenswrapper[26474]: I0223 13:22:43.535122 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdv9d\" (UniqueName: \"kubernetes.io/projected/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-kube-api-access-tdv9d\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:43.535215 master-0 kubenswrapper[26474]: I0223 13:22:43.535190 26474 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:43.535215 master-0 kubenswrapper[26474]: I0223 13:22:43.535209 26474 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7-util\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:43.854931 master-0 kubenswrapper[26474]: I0223 13:22:43.854813 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" event={"ID":"786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7","Type":"ContainerDied","Data":"f3b44e5d010b91d5e4d8fe0c8bb1f59574635e19831cb0ee6709959b18aa1c25"} Feb 23 13:22:43.854931 master-0 kubenswrapper[26474]: I0223 13:22:43.854865 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b44e5d010b91d5e4d8fe0c8bb1f59574635e19831cb0ee6709959b18aa1c25" Feb 23 13:22:43.854931 master-0 kubenswrapper[26474]: I0223 13:22:43.854885 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213f42nk" Feb 23 13:22:44.274754 master-0 kubenswrapper[26474]: I0223 13:22:44.274665 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:44.282636 master-0 kubenswrapper[26474]: I0223 13:22:44.281285 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:44.355306 master-0 kubenswrapper[26474]: I0223 13:22:44.352775 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-util\") pod \"a5767294-6282-42d2-996a-d9fd02669752\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " Feb 23 13:22:44.355306 master-0 kubenswrapper[26474]: I0223 13:22:44.353272 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-util\") pod \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " Feb 23 13:22:44.355306 master-0 kubenswrapper[26474]: I0223 13:22:44.353410 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lml4\" (UniqueName: \"kubernetes.io/projected/a5767294-6282-42d2-996a-d9fd02669752-kube-api-access-6lml4\") pod \"a5767294-6282-42d2-996a-d9fd02669752\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " Feb 23 13:22:44.355306 master-0 kubenswrapper[26474]: I0223 13:22:44.353511 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58ntj\" (UniqueName: \"kubernetes.io/projected/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-kube-api-access-58ntj\") pod \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " Feb 23 13:22:44.355306 master-0 kubenswrapper[26474]: I0223 13:22:44.353687 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-bundle\") pod \"a5767294-6282-42d2-996a-d9fd02669752\" (UID: \"a5767294-6282-42d2-996a-d9fd02669752\") " Feb 23 13:22:44.355306 master-0 kubenswrapper[26474]: I0223 13:22:44.353762 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-bundle\") pod \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\" (UID: \"69f6a151-9c5e-4456-a8dd-dbaf757a91d0\") " Feb 23 13:22:44.359644 master-0 kubenswrapper[26474]: I0223 13:22:44.355947 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-bundle" (OuterVolumeSpecName: "bundle") pod "69f6a151-9c5e-4456-a8dd-dbaf757a91d0" (UID: "69f6a151-9c5e-4456-a8dd-dbaf757a91d0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:44.359644 master-0 kubenswrapper[26474]: I0223 13:22:44.359305 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-bundle" (OuterVolumeSpecName: "bundle") pod "a5767294-6282-42d2-996a-d9fd02669752" (UID: "a5767294-6282-42d2-996a-d9fd02669752"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:44.361722 master-0 kubenswrapper[26474]: I0223 13:22:44.361194 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-kube-api-access-58ntj" (OuterVolumeSpecName: "kube-api-access-58ntj") pod "69f6a151-9c5e-4456-a8dd-dbaf757a91d0" (UID: "69f6a151-9c5e-4456-a8dd-dbaf757a91d0"). InnerVolumeSpecName "kube-api-access-58ntj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:44.361722 master-0 kubenswrapper[26474]: I0223 13:22:44.361523 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5767294-6282-42d2-996a-d9fd02669752-kube-api-access-6lml4" (OuterVolumeSpecName: "kube-api-access-6lml4") pod "a5767294-6282-42d2-996a-d9fd02669752" (UID: "a5767294-6282-42d2-996a-d9fd02669752"). InnerVolumeSpecName "kube-api-access-6lml4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:44.378718 master-0 kubenswrapper[26474]: I0223 13:22:44.378612 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-util" (OuterVolumeSpecName: "util") pod "a5767294-6282-42d2-996a-d9fd02669752" (UID: "a5767294-6282-42d2-996a-d9fd02669752"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:44.382000 master-0 kubenswrapper[26474]: I0223 13:22:44.381938 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-util" (OuterVolumeSpecName: "util") pod "69f6a151-9c5e-4456-a8dd-dbaf757a91d0" (UID: "69f6a151-9c5e-4456-a8dd-dbaf757a91d0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:44.456862 master-0 kubenswrapper[26474]: I0223 13:22:44.456609 26474 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:44.456862 master-0 kubenswrapper[26474]: I0223 13:22:44.456690 26474 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:44.456862 master-0 kubenswrapper[26474]: I0223 13:22:44.456710 26474 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5767294-6282-42d2-996a-d9fd02669752-util\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:44.456862 master-0 kubenswrapper[26474]: I0223 13:22:44.456731 26474 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-util\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:44.456862 master-0 kubenswrapper[26474]: I0223 13:22:44.456787 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lml4\" (UniqueName: \"kubernetes.io/projected/a5767294-6282-42d2-996a-d9fd02669752-kube-api-access-6lml4\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:44.456862 master-0 kubenswrapper[26474]: I0223 13:22:44.456856 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58ntj\" (UniqueName: \"kubernetes.io/projected/69f6a151-9c5e-4456-a8dd-dbaf757a91d0-kube-api-access-58ntj\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:44.866764 master-0 kubenswrapper[26474]: I0223 13:22:44.866703 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" 
event={"ID":"69f6a151-9c5e-4456-a8dd-dbaf757a91d0","Type":"ContainerDied","Data":"85db7ef670ceada15bebeb107d00d9d573714a273cec83f8a3ff373295cb1579"} Feb 23 13:22:44.866764 master-0 kubenswrapper[26474]: I0223 13:22:44.866758 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85db7ef670ceada15bebeb107d00d9d573714a273cec83f8a3ff373295cb1579" Feb 23 13:22:44.867351 master-0 kubenswrapper[26474]: I0223 13:22:44.866832 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat6xv8" Feb 23 13:22:44.870072 master-0 kubenswrapper[26474]: I0223 13:22:44.870040 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" event={"ID":"a5767294-6282-42d2-996a-d9fd02669752","Type":"ContainerDied","Data":"e15a5ccf047fc703cc1e6dc8a7a9160b136bb6549d2735531dddf43786a0064e"} Feb 23 13:22:44.870072 master-0 kubenswrapper[26474]: I0223 13:22:44.870071 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e15a5ccf047fc703cc1e6dc8a7a9160b136bb6549d2735531dddf43786a0064e" Feb 23 13:22:44.870229 master-0 kubenswrapper[26474]: I0223 13:22:44.870161 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v45zj" Feb 23 13:22:46.165897 master-0 kubenswrapper[26474]: I0223 13:22:46.165842 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6"] Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: E0223 13:22:46.166138 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerName="util" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166151 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerName="util" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: E0223 13:22:46.166163 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerName="extract" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166169 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerName="extract" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: E0223 13:22:46.166178 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerName="util" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166185 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerName="util" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: E0223 13:22:46.166199 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerName="extract" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166205 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerName="extract" Feb 23 13:22:46.166475 master-0 
kubenswrapper[26474]: E0223 13:22:46.166214 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerName="pull" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166220 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerName="pull" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: E0223 13:22:46.166233 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5767294-6282-42d2-996a-d9fd02669752" containerName="pull" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166239 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5767294-6282-42d2-996a-d9fd02669752" containerName="pull" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: E0223 13:22:46.166251 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5767294-6282-42d2-996a-d9fd02669752" containerName="extract" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166257 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5767294-6282-42d2-996a-d9fd02669752" containerName="extract" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: E0223 13:22:46.166267 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerName="pull" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166273 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerName="pull" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: E0223 13:22:46.166289 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5767294-6282-42d2-996a-d9fd02669752" containerName="util" Feb 23 13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166296 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5767294-6282-42d2-996a-d9fd02669752" containerName="util" Feb 23 
13:22:46.166475 master-0 kubenswrapper[26474]: I0223 13:22:46.166473 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f6a151-9c5e-4456-a8dd-dbaf757a91d0" containerName="extract" Feb 23 13:22:46.167087 master-0 kubenswrapper[26474]: I0223 13:22:46.166500 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="786df1c5-f5f9-4f2d-9cd4-f2c1a7e6fbd7" containerName="extract" Feb 23 13:22:46.167087 master-0 kubenswrapper[26474]: I0223 13:22:46.166545 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5767294-6282-42d2-996a-d9fd02669752" containerName="extract" Feb 23 13:22:46.167527 master-0 kubenswrapper[26474]: I0223 13:22:46.167502 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.170381 master-0 kubenswrapper[26474]: I0223 13:22:46.170331 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-zxmx7" Feb 23 13:22:46.187937 master-0 kubenswrapper[26474]: I0223 13:22:46.187882 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6"] Feb 23 13:22:46.289287 master-0 kubenswrapper[26474]: I0223 13:22:46.289221 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.289521 master-0 kubenswrapper[26474]: I0223 13:22:46.289295 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.289521 master-0 kubenswrapper[26474]: I0223 13:22:46.289353 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb685\" (UniqueName: \"kubernetes.io/projected/986db700-dfe2-46d3-9138-b10b0f28e2c1-kube-api-access-zb685\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.391742 master-0 kubenswrapper[26474]: I0223 13:22:46.391657 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.391742 master-0 kubenswrapper[26474]: I0223 13:22:46.391733 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.391995 master-0 kubenswrapper[26474]: I0223 13:22:46.391777 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb685\" (UniqueName: 
\"kubernetes.io/projected/986db700-dfe2-46d3-9138-b10b0f28e2c1-kube-api-access-zb685\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.392350 master-0 kubenswrapper[26474]: I0223 13:22:46.392286 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.392402 master-0 kubenswrapper[26474]: I0223 13:22:46.392321 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.407903 master-0 kubenswrapper[26474]: I0223 13:22:46.407854 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb685\" (UniqueName: \"kubernetes.io/projected/986db700-dfe2-46d3-9138-b10b0f28e2c1-kube-api-access-zb685\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.516991 master-0 kubenswrapper[26474]: I0223 13:22:46.516916 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:46.900301 master-0 kubenswrapper[26474]: I0223 13:22:46.900236 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6"] Feb 23 13:22:46.907011 master-0 kubenswrapper[26474]: W0223 13:22:46.906952 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod986db700_dfe2_46d3_9138_b10b0f28e2c1.slice/crio-42efc19117af7aadfaeb1b18845b3b5e87915ba4fd0845c4c51ef75b1dfe92b0 WatchSource:0}: Error finding container 42efc19117af7aadfaeb1b18845b3b5e87915ba4fd0845c4c51ef75b1dfe92b0: Status 404 returned error can't find the container with id 42efc19117af7aadfaeb1b18845b3b5e87915ba4fd0845c4c51ef75b1dfe92b0 Feb 23 13:22:47.901180 master-0 kubenswrapper[26474]: I0223 13:22:47.901124 26474 generic.go:334] "Generic (PLEG): container finished" podID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerID="b929d9badd52e1106c1f4bf61f8fc38f9e50cb0a65df1fee296badd79b646e6f" exitCode=0 Feb 23 13:22:47.901880 master-0 kubenswrapper[26474]: I0223 13:22:47.901831 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" event={"ID":"986db700-dfe2-46d3-9138-b10b0f28e2c1","Type":"ContainerDied","Data":"b929d9badd52e1106c1f4bf61f8fc38f9e50cb0a65df1fee296badd79b646e6f"} Feb 23 13:22:47.901998 master-0 kubenswrapper[26474]: I0223 13:22:47.901978 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" event={"ID":"986db700-dfe2-46d3-9138-b10b0f28e2c1","Type":"ContainerStarted","Data":"42efc19117af7aadfaeb1b18845b3b5e87915ba4fd0845c4c51ef75b1dfe92b0"} Feb 23 13:22:49.919023 master-0 kubenswrapper[26474]: I0223 13:22:49.918946 26474 
generic.go:334] "Generic (PLEG): container finished" podID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerID="f22a62f9dc6d039d4467031c3ab2252f23746c9db33a836f6bb08708e3634fbc" exitCode=0 Feb 23 13:22:49.919023 master-0 kubenswrapper[26474]: I0223 13:22:49.919005 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" event={"ID":"986db700-dfe2-46d3-9138-b10b0f28e2c1","Type":"ContainerDied","Data":"f22a62f9dc6d039d4467031c3ab2252f23746c9db33a836f6bb08708e3634fbc"} Feb 23 13:22:50.696685 master-0 kubenswrapper[26474]: I0223 13:22:50.696525 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr"] Feb 23 13:22:50.697703 master-0 kubenswrapper[26474]: I0223 13:22:50.697666 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" Feb 23 13:22:50.702980 master-0 kubenswrapper[26474]: I0223 13:22:50.702162 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 23 13:22:50.702980 master-0 kubenswrapper[26474]: I0223 13:22:50.702610 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 23 13:22:50.714730 master-0 kubenswrapper[26474]: I0223 13:22:50.714662 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr"] Feb 23 13:22:50.766040 master-0 kubenswrapper[26474]: I0223 13:22:50.765564 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qksqf\" (UniqueName: \"kubernetes.io/projected/573cc5fd-dc57-444f-bb45-d7ec27c624d1-kube-api-access-qksqf\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ssnlr\" (UID: 
\"573cc5fd-dc57-444f-bb45-d7ec27c624d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" Feb 23 13:22:50.766040 master-0 kubenswrapper[26474]: I0223 13:22:50.765719 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/573cc5fd-dc57-444f-bb45-d7ec27c624d1-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ssnlr\" (UID: \"573cc5fd-dc57-444f-bb45-d7ec27c624d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" Feb 23 13:22:50.867607 master-0 kubenswrapper[26474]: I0223 13:22:50.867515 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qksqf\" (UniqueName: \"kubernetes.io/projected/573cc5fd-dc57-444f-bb45-d7ec27c624d1-kube-api-access-qksqf\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ssnlr\" (UID: \"573cc5fd-dc57-444f-bb45-d7ec27c624d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" Feb 23 13:22:50.867979 master-0 kubenswrapper[26474]: I0223 13:22:50.867682 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/573cc5fd-dc57-444f-bb45-d7ec27c624d1-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ssnlr\" (UID: \"573cc5fd-dc57-444f-bb45-d7ec27c624d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" Feb 23 13:22:50.868794 master-0 kubenswrapper[26474]: I0223 13:22:50.868518 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/573cc5fd-dc57-444f-bb45-d7ec27c624d1-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ssnlr\" (UID: \"573cc5fd-dc57-444f-bb45-d7ec27c624d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" Feb 23 
13:22:50.899380 master-0 kubenswrapper[26474]: I0223 13:22:50.894694 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qksqf\" (UniqueName: \"kubernetes.io/projected/573cc5fd-dc57-444f-bb45-d7ec27c624d1-kube-api-access-qksqf\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ssnlr\" (UID: \"573cc5fd-dc57-444f-bb45-d7ec27c624d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" Feb 23 13:22:50.930717 master-0 kubenswrapper[26474]: I0223 13:22:50.930651 26474 generic.go:334] "Generic (PLEG): container finished" podID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerID="2469851a0847cff7b4d6e8af25ce7a746bde23e95a552ee969ce6b9e6c272049" exitCode=0 Feb 23 13:22:50.930717 master-0 kubenswrapper[26474]: I0223 13:22:50.930712 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" event={"ID":"986db700-dfe2-46d3-9138-b10b0f28e2c1","Type":"ContainerDied","Data":"2469851a0847cff7b4d6e8af25ce7a746bde23e95a552ee969ce6b9e6c272049"} Feb 23 13:22:51.017367 master-0 kubenswrapper[26474]: I0223 13:22:51.015716 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" Feb 23 13:22:51.618491 master-0 kubenswrapper[26474]: W0223 13:22:51.618433 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod573cc5fd_dc57_444f_bb45_d7ec27c624d1.slice/crio-ded59b4fad4c706a5cf6fb29607dab91d8807672b5169071576b9696f16cb27d WatchSource:0}: Error finding container ded59b4fad4c706a5cf6fb29607dab91d8807672b5169071576b9696f16cb27d: Status 404 returned error can't find the container with id ded59b4fad4c706a5cf6fb29607dab91d8807672b5169071576b9696f16cb27d Feb 23 13:22:51.625796 master-0 kubenswrapper[26474]: I0223 13:22:51.625758 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr"] Feb 23 13:22:51.940277 master-0 kubenswrapper[26474]: I0223 13:22:51.940021 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" event={"ID":"573cc5fd-dc57-444f-bb45-d7ec27c624d1","Type":"ContainerStarted","Data":"ded59b4fad4c706a5cf6fb29607dab91d8807672b5169071576b9696f16cb27d"} Feb 23 13:22:52.305323 master-0 kubenswrapper[26474]: I0223 13:22:52.305246 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:52.400081 master-0 kubenswrapper[26474]: I0223 13:22:52.399996 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-bundle\") pod \"986db700-dfe2-46d3-9138-b10b0f28e2c1\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " Feb 23 13:22:52.400329 master-0 kubenswrapper[26474]: I0223 13:22:52.400123 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-util\") pod \"986db700-dfe2-46d3-9138-b10b0f28e2c1\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " Feb 23 13:22:52.400329 master-0 kubenswrapper[26474]: I0223 13:22:52.400234 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb685\" (UniqueName: \"kubernetes.io/projected/986db700-dfe2-46d3-9138-b10b0f28e2c1-kube-api-access-zb685\") pod \"986db700-dfe2-46d3-9138-b10b0f28e2c1\" (UID: \"986db700-dfe2-46d3-9138-b10b0f28e2c1\") " Feb 23 13:22:52.405432 master-0 kubenswrapper[26474]: I0223 13:22:52.404616 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-bundle" (OuterVolumeSpecName: "bundle") pod "986db700-dfe2-46d3-9138-b10b0f28e2c1" (UID: "986db700-dfe2-46d3-9138-b10b0f28e2c1"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:52.407865 master-0 kubenswrapper[26474]: I0223 13:22:52.407815 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986db700-dfe2-46d3-9138-b10b0f28e2c1-kube-api-access-zb685" (OuterVolumeSpecName: "kube-api-access-zb685") pod "986db700-dfe2-46d3-9138-b10b0f28e2c1" (UID: "986db700-dfe2-46d3-9138-b10b0f28e2c1"). InnerVolumeSpecName "kube-api-access-zb685". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:52.434234 master-0 kubenswrapper[26474]: I0223 13:22:52.434139 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-util" (OuterVolumeSpecName: "util") pod "986db700-dfe2-46d3-9138-b10b0f28e2c1" (UID: "986db700-dfe2-46d3-9138-b10b0f28e2c1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:52.520366 master-0 kubenswrapper[26474]: I0223 13:22:52.507793 26474 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-util\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:52.520366 master-0 kubenswrapper[26474]: I0223 13:22:52.507835 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb685\" (UniqueName: \"kubernetes.io/projected/986db700-dfe2-46d3-9138-b10b0f28e2c1-kube-api-access-zb685\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:52.520366 master-0 kubenswrapper[26474]: I0223 13:22:52.507848 26474 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986db700-dfe2-46d3-9138-b10b0f28e2c1-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:52.949954 master-0 kubenswrapper[26474]: I0223 13:22:52.949806 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" event={"ID":"986db700-dfe2-46d3-9138-b10b0f28e2c1","Type":"ContainerDied","Data":"42efc19117af7aadfaeb1b18845b3b5e87915ba4fd0845c4c51ef75b1dfe92b0"} Feb 23 13:22:52.949954 master-0 kubenswrapper[26474]: I0223 13:22:52.949866 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42efc19117af7aadfaeb1b18845b3b5e87915ba4fd0845c4c51ef75b1dfe92b0" Feb 23 13:22:52.949954 master-0 kubenswrapper[26474]: I0223 13:22:52.949930 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vbz6" Feb 23 13:22:55.983395 master-0 kubenswrapper[26474]: I0223 13:22:55.983300 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" event={"ID":"573cc5fd-dc57-444f-bb45-d7ec27c624d1","Type":"ContainerStarted","Data":"acae6a928a1b8d970cbdf6ab03723dc630c08a3c4578491fa4819ad227497470"} Feb 23 13:22:56.016349 master-0 kubenswrapper[26474]: I0223 13:22:56.016229 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ssnlr" podStartSLOduration=2.513649652 podStartE2EDuration="6.016204017s" podCreationTimestamp="2026-02-23 13:22:50 +0000 UTC" firstStartedPulling="2026-02-23 13:22:51.622282794 +0000 UTC m=+493.468790471" lastFinishedPulling="2026-02-23 13:22:55.124837159 +0000 UTC m=+496.971344836" observedRunningTime="2026-02-23 13:22:56.01052572 +0000 UTC m=+497.857033457" watchObservedRunningTime="2026-02-23 13:22:56.016204017 +0000 UTC m=+497.862711724" Feb 23 13:23:02.065850 master-0 kubenswrapper[26474]: I0223 13:23:02.065759 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6v58v"] Feb 23 13:23:02.066739 master-0 
kubenswrapper[26474]: E0223 13:23:02.066067 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerName="extract" Feb 23 13:23:02.066739 master-0 kubenswrapper[26474]: I0223 13:23:02.066079 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerName="extract" Feb 23 13:23:02.066739 master-0 kubenswrapper[26474]: E0223 13:23:02.066101 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerName="pull" Feb 23 13:23:02.066739 master-0 kubenswrapper[26474]: I0223 13:23:02.066108 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerName="pull" Feb 23 13:23:02.066739 master-0 kubenswrapper[26474]: E0223 13:23:02.066141 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerName="util" Feb 23 13:23:02.066739 master-0 kubenswrapper[26474]: I0223 13:23:02.066148 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerName="util" Feb 23 13:23:02.066739 master-0 kubenswrapper[26474]: I0223 13:23:02.066295 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="986db700-dfe2-46d3-9138-b10b0f28e2c1" containerName="extract" Feb 23 13:23:02.067167 master-0 kubenswrapper[26474]: I0223 13:23:02.066760 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" Feb 23 13:23:02.072846 master-0 kubenswrapper[26474]: I0223 13:23:02.072760 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 23 13:23:02.082531 master-0 kubenswrapper[26474]: I0223 13:23:02.082456 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 23 13:23:02.085275 master-0 kubenswrapper[26474]: I0223 13:23:02.085198 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6v58v"] Feb 23 13:23:02.197322 master-0 kubenswrapper[26474]: I0223 13:23:02.197213 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr42n\" (UniqueName: \"kubernetes.io/projected/422b2519-74b7-4386-9eb1-557dd53be8e8-kube-api-access-rr42n\") pod \"cert-manager-cainjector-5545bd876-6v58v\" (UID: \"422b2519-74b7-4386-9eb1-557dd53be8e8\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" Feb 23 13:23:02.197963 master-0 kubenswrapper[26474]: I0223 13:23:02.197904 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/422b2519-74b7-4386-9eb1-557dd53be8e8-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6v58v\" (UID: \"422b2519-74b7-4386-9eb1-557dd53be8e8\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" Feb 23 13:23:02.299037 master-0 kubenswrapper[26474]: I0223 13:23:02.298963 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/422b2519-74b7-4386-9eb1-557dd53be8e8-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6v58v\" (UID: \"422b2519-74b7-4386-9eb1-557dd53be8e8\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" Feb 23 
13:23:02.299244 master-0 kubenswrapper[26474]: I0223 13:23:02.299183 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr42n\" (UniqueName: \"kubernetes.io/projected/422b2519-74b7-4386-9eb1-557dd53be8e8-kube-api-access-rr42n\") pod \"cert-manager-cainjector-5545bd876-6v58v\" (UID: \"422b2519-74b7-4386-9eb1-557dd53be8e8\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" Feb 23 13:23:02.315148 master-0 kubenswrapper[26474]: I0223 13:23:02.315068 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr42n\" (UniqueName: \"kubernetes.io/projected/422b2519-74b7-4386-9eb1-557dd53be8e8-kube-api-access-rr42n\") pod \"cert-manager-cainjector-5545bd876-6v58v\" (UID: \"422b2519-74b7-4386-9eb1-557dd53be8e8\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" Feb 23 13:23:02.317390 master-0 kubenswrapper[26474]: I0223 13:23:02.317272 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/422b2519-74b7-4386-9eb1-557dd53be8e8-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6v58v\" (UID: \"422b2519-74b7-4386-9eb1-557dd53be8e8\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" Feb 23 13:23:02.391635 master-0 kubenswrapper[26474]: I0223 13:23:02.391524 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" Feb 23 13:23:02.861442 master-0 kubenswrapper[26474]: W0223 13:23:02.861380 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod422b2519_74b7_4386_9eb1_557dd53be8e8.slice/crio-50e21ad0d25b602d59673496d60f641a0574f86c6f12ccb9266f5df3ad20ad0c WatchSource:0}: Error finding container 50e21ad0d25b602d59673496d60f641a0574f86c6f12ccb9266f5df3ad20ad0c: Status 404 returned error can't find the container with id 50e21ad0d25b602d59673496d60f641a0574f86c6f12ccb9266f5df3ad20ad0c Feb 23 13:23:02.863456 master-0 kubenswrapper[26474]: I0223 13:23:02.863330 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6v58v"] Feb 23 13:23:03.032525 master-0 kubenswrapper[26474]: I0223 13:23:03.032463 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" event={"ID":"422b2519-74b7-4386-9eb1-557dd53be8e8","Type":"ContainerStarted","Data":"50e21ad0d25b602d59673496d60f641a0574f86c6f12ccb9266f5df3ad20ad0c"} Feb 23 13:23:04.031194 master-0 kubenswrapper[26474]: I0223 13:23:04.031136 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bpgsl"] Feb 23 13:23:04.032458 master-0 kubenswrapper[26474]: I0223 13:23:04.032429 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bpgsl" Feb 23 13:23:04.034876 master-0 kubenswrapper[26474]: I0223 13:23:04.034836 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 23 13:23:04.035214 master-0 kubenswrapper[26474]: I0223 13:23:04.035191 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 23 13:23:04.055491 master-0 kubenswrapper[26474]: I0223 13:23:04.055416 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bpgsl"] Feb 23 13:23:04.140238 master-0 kubenswrapper[26474]: I0223 13:23:04.140160 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgmn\" (UniqueName: \"kubernetes.io/projected/cd299e55-d6cc-4482-937f-5c4c9248b7d6-kube-api-access-stgmn\") pod \"nmstate-operator-694c9596b7-bpgsl\" (UID: \"cd299e55-d6cc-4482-937f-5c4c9248b7d6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bpgsl" Feb 23 13:23:04.241270 master-0 kubenswrapper[26474]: I0223 13:23:04.241153 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stgmn\" (UniqueName: \"kubernetes.io/projected/cd299e55-d6cc-4482-937f-5c4c9248b7d6-kube-api-access-stgmn\") pod \"nmstate-operator-694c9596b7-bpgsl\" (UID: \"cd299e55-d6cc-4482-937f-5c4c9248b7d6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bpgsl" Feb 23 13:23:04.263650 master-0 kubenswrapper[26474]: I0223 13:23:04.263602 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgmn\" (UniqueName: \"kubernetes.io/projected/cd299e55-d6cc-4482-937f-5c4c9248b7d6-kube-api-access-stgmn\") pod \"nmstate-operator-694c9596b7-bpgsl\" (UID: \"cd299e55-d6cc-4482-937f-5c4c9248b7d6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bpgsl" Feb 23 13:23:04.352863 
master-0 kubenswrapper[26474]: I0223 13:23:04.352734 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bpgsl" Feb 23 13:23:04.764234 master-0 kubenswrapper[26474]: I0223 13:23:04.764115 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bpgsl"] Feb 23 13:23:04.766293 master-0 kubenswrapper[26474]: W0223 13:23:04.766204 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd299e55_d6cc_4482_937f_5c4c9248b7d6.slice/crio-0c01318b710ff643422ad36d0f0de6cd7b81ef711598736ce3c289d0b0231f8c WatchSource:0}: Error finding container 0c01318b710ff643422ad36d0f0de6cd7b81ef711598736ce3c289d0b0231f8c: Status 404 returned error can't find the container with id 0c01318b710ff643422ad36d0f0de6cd7b81ef711598736ce3c289d0b0231f8c Feb 23 13:23:04.898369 master-0 kubenswrapper[26474]: I0223 13:23:04.895199 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-kq9nr"] Feb 23 13:23:04.898369 master-0 kubenswrapper[26474]: I0223 13:23:04.896182 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:04.909298 master-0 kubenswrapper[26474]: I0223 13:23:04.909171 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-kq9nr"] Feb 23 13:23:04.958613 master-0 kubenswrapper[26474]: I0223 13:23:04.958541 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxgt\" (UniqueName: \"kubernetes.io/projected/2c8674c6-7547-4de1-8485-998b3556fe78-kube-api-access-xfxgt\") pod \"cert-manager-webhook-6888856db4-kq9nr\" (UID: \"2c8674c6-7547-4de1-8485-998b3556fe78\") " pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:04.958841 master-0 kubenswrapper[26474]: I0223 13:23:04.958672 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c8674c6-7547-4de1-8485-998b3556fe78-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-kq9nr\" (UID: \"2c8674c6-7547-4de1-8485-998b3556fe78\") " pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:05.049940 master-0 kubenswrapper[26474]: I0223 13:23:05.049800 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bpgsl" event={"ID":"cd299e55-d6cc-4482-937f-5c4c9248b7d6","Type":"ContainerStarted","Data":"0c01318b710ff643422ad36d0f0de6cd7b81ef711598736ce3c289d0b0231f8c"} Feb 23 13:23:05.059990 master-0 kubenswrapper[26474]: I0223 13:23:05.059896 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxgt\" (UniqueName: \"kubernetes.io/projected/2c8674c6-7547-4de1-8485-998b3556fe78-kube-api-access-xfxgt\") pod \"cert-manager-webhook-6888856db4-kq9nr\" (UID: \"2c8674c6-7547-4de1-8485-998b3556fe78\") " pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:05.060190 master-0 
kubenswrapper[26474]: I0223 13:23:05.060017 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c8674c6-7547-4de1-8485-998b3556fe78-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-kq9nr\" (UID: \"2c8674c6-7547-4de1-8485-998b3556fe78\") " pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:05.075017 master-0 kubenswrapper[26474]: I0223 13:23:05.074964 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c8674c6-7547-4de1-8485-998b3556fe78-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-kq9nr\" (UID: \"2c8674c6-7547-4de1-8485-998b3556fe78\") " pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:05.075486 master-0 kubenswrapper[26474]: I0223 13:23:05.075456 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxgt\" (UniqueName: \"kubernetes.io/projected/2c8674c6-7547-4de1-8485-998b3556fe78-kube-api-access-xfxgt\") pod \"cert-manager-webhook-6888856db4-kq9nr\" (UID: \"2c8674c6-7547-4de1-8485-998b3556fe78\") " pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:05.219455 master-0 kubenswrapper[26474]: I0223 13:23:05.216000 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:05.458921 master-0 kubenswrapper[26474]: W0223 13:23:05.458857 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c8674c6_7547_4de1_8485_998b3556fe78.slice/crio-47cb277cd727ea259e43d29d314129e6f268aca1e9ddfdcc5e9b8030f6d18184 WatchSource:0}: Error finding container 47cb277cd727ea259e43d29d314129e6f268aca1e9ddfdcc5e9b8030f6d18184: Status 404 returned error can't find the container with id 47cb277cd727ea259e43d29d314129e6f268aca1e9ddfdcc5e9b8030f6d18184 Feb 23 13:23:05.459140 master-0 kubenswrapper[26474]: I0223 13:23:05.459051 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-kq9nr"] Feb 23 13:23:06.060181 master-0 kubenswrapper[26474]: I0223 13:23:06.059756 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" event={"ID":"2c8674c6-7547-4de1-8485-998b3556fe78","Type":"ContainerStarted","Data":"47cb277cd727ea259e43d29d314129e6f268aca1e9ddfdcc5e9b8030f6d18184"} Feb 23 13:23:07.462861 master-0 kubenswrapper[26474]: I0223 13:23:07.462784 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n"] Feb 23 13:23:07.463982 master-0 kubenswrapper[26474]: I0223 13:23:07.463959 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.466795 master-0 kubenswrapper[26474]: I0223 13:23:07.466754 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 23 13:23:07.467262 master-0 kubenswrapper[26474]: I0223 13:23:07.467229 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 23 13:23:07.467698 master-0 kubenswrapper[26474]: I0223 13:23:07.467483 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 23 13:23:07.467698 master-0 kubenswrapper[26474]: I0223 13:23:07.467654 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 23 13:23:07.484983 master-0 kubenswrapper[26474]: I0223 13:23:07.484927 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n"] Feb 23 13:23:07.527730 master-0 kubenswrapper[26474]: I0223 13:23:07.527663 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b482fa9f-094a-41d1-8259-fc9d625f0b65-apiservice-cert\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.527932 master-0 kubenswrapper[26474]: I0223 13:23:07.527745 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g84lc\" (UniqueName: \"kubernetes.io/projected/b482fa9f-094a-41d1-8259-fc9d625f0b65-kube-api-access-g84lc\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " 
pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.527932 master-0 kubenswrapper[26474]: I0223 13:23:07.527844 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b482fa9f-094a-41d1-8259-fc9d625f0b65-webhook-cert\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.637362 master-0 kubenswrapper[26474]: I0223 13:23:07.629029 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b482fa9f-094a-41d1-8259-fc9d625f0b65-apiservice-cert\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.637362 master-0 kubenswrapper[26474]: I0223 13:23:07.629112 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g84lc\" (UniqueName: \"kubernetes.io/projected/b482fa9f-094a-41d1-8259-fc9d625f0b65-kube-api-access-g84lc\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.637362 master-0 kubenswrapper[26474]: I0223 13:23:07.629193 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b482fa9f-094a-41d1-8259-fc9d625f0b65-webhook-cert\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.657528 master-0 kubenswrapper[26474]: I0223 13:23:07.657480 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g84lc\" (UniqueName: \"kubernetes.io/projected/b482fa9f-094a-41d1-8259-fc9d625f0b65-kube-api-access-g84lc\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.659289 master-0 kubenswrapper[26474]: I0223 13:23:07.659239 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b482fa9f-094a-41d1-8259-fc9d625f0b65-webhook-cert\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.664400 master-0 kubenswrapper[26474]: I0223 13:23:07.664356 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b482fa9f-094a-41d1-8259-fc9d625f0b65-apiservice-cert\") pod \"metallb-operator-controller-manager-699c7b98cc-68v7n\" (UID: \"b482fa9f-094a-41d1-8259-fc9d625f0b65\") " pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.800361 master-0 kubenswrapper[26474]: I0223 13:23:07.796998 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:07.921487 master-0 kubenswrapper[26474]: I0223 13:23:07.921426 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t"] Feb 23 13:23:07.924279 master-0 kubenswrapper[26474]: I0223 13:23:07.922360 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:07.933717 master-0 kubenswrapper[26474]: I0223 13:23:07.932093 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 13:23:07.933717 master-0 kubenswrapper[26474]: I0223 13:23:07.932825 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 23 13:23:07.946853 master-0 kubenswrapper[26474]: I0223 13:23:07.946768 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t"] Feb 23 13:23:08.048638 master-0 kubenswrapper[26474]: I0223 13:23:08.044232 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mt6t\" (UniqueName: \"kubernetes.io/projected/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-kube-api-access-9mt6t\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: \"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.048638 master-0 kubenswrapper[26474]: I0223 13:23:08.044303 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-webhook-cert\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: \"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.048638 master-0 kubenswrapper[26474]: I0223 13:23:08.044327 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-apiservice-cert\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: 
\"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.146431 master-0 kubenswrapper[26474]: I0223 13:23:08.145974 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mt6t\" (UniqueName: \"kubernetes.io/projected/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-kube-api-access-9mt6t\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: \"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.146431 master-0 kubenswrapper[26474]: I0223 13:23:08.146054 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-webhook-cert\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: \"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.146431 master-0 kubenswrapper[26474]: I0223 13:23:08.146078 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-apiservice-cert\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: \"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.153363 master-0 kubenswrapper[26474]: I0223 13:23:08.149522 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-apiservice-cert\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: \"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.153363 master-0 kubenswrapper[26474]: I0223 13:23:08.152982 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-webhook-cert\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: \"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.176566 master-0 kubenswrapper[26474]: I0223 13:23:08.174946 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mt6t\" (UniqueName: \"kubernetes.io/projected/7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da-kube-api-access-9mt6t\") pod \"metallb-operator-webhook-server-8bb78c4ff-kc79t\" (UID: \"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da\") " pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:08.277292 master-0 kubenswrapper[26474]: I0223 13:23:08.276712 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:09.748016 master-0 kubenswrapper[26474]: W0223 13:23:09.747962 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb482fa9f_094a_41d1_8259_fc9d625f0b65.slice/crio-28feb95a91cab44e5d42bc42836184475489d508a6bc427e2e845a7e106108da WatchSource:0}: Error finding container 28feb95a91cab44e5d42bc42836184475489d508a6bc427e2e845a7e106108da: Status 404 returned error can't find the container with id 28feb95a91cab44e5d42bc42836184475489d508a6bc427e2e845a7e106108da Feb 23 13:23:09.761354 master-0 kubenswrapper[26474]: I0223 13:23:09.761244 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n"] Feb 23 13:23:09.809369 master-0 kubenswrapper[26474]: I0223 13:23:09.807499 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t"] Feb 23 
13:23:10.105309 master-0 kubenswrapper[26474]: I0223 13:23:10.105243 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" event={"ID":"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da","Type":"ContainerStarted","Data":"e279abaeb61ef00688c1a7ba4404de8da1090edde74693efe340210745f511f3"} Feb 23 13:23:10.107079 master-0 kubenswrapper[26474]: I0223 13:23:10.107009 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bpgsl" event={"ID":"cd299e55-d6cc-4482-937f-5c4c9248b7d6","Type":"ContainerStarted","Data":"84200bb14ca468034d17f4ee9a87ee7d600ae115476a9015fa13fb4dd0389d3f"} Feb 23 13:23:10.109514 master-0 kubenswrapper[26474]: I0223 13:23:10.109467 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" event={"ID":"2c8674c6-7547-4de1-8485-998b3556fe78","Type":"ContainerStarted","Data":"437593b2daa736e2f346f997bc0c4ceeb7530af7dc4037b64db3b64dc4dd63da"} Feb 23 13:23:10.109606 master-0 kubenswrapper[26474]: I0223 13:23:10.109579 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:10.111096 master-0 kubenswrapper[26474]: I0223 13:23:10.111050 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" event={"ID":"b482fa9f-094a-41d1-8259-fc9d625f0b65","Type":"ContainerStarted","Data":"28feb95a91cab44e5d42bc42836184475489d508a6bc427e2e845a7e106108da"} Feb 23 13:23:10.112960 master-0 kubenswrapper[26474]: I0223 13:23:10.112915 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" event={"ID":"422b2519-74b7-4386-9eb1-557dd53be8e8","Type":"ContainerStarted","Data":"7328da7e102ff52d1bee238c4735f07809983d1f3be789bd43c07b1ee788048d"} Feb 23 13:23:10.123828 master-0 kubenswrapper[26474]: I0223 
13:23:10.123754 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-bpgsl" podStartSLOduration=2.703576941 podStartE2EDuration="7.123737267s" podCreationTimestamp="2026-02-23 13:23:03 +0000 UTC" firstStartedPulling="2026-02-23 13:23:04.773089987 +0000 UTC m=+506.619597684" lastFinishedPulling="2026-02-23 13:23:09.193250333 +0000 UTC m=+511.039758010" observedRunningTime="2026-02-23 13:23:10.123378579 +0000 UTC m=+511.969886266" watchObservedRunningTime="2026-02-23 13:23:10.123737267 +0000 UTC m=+511.970244954" Feb 23 13:23:10.148362 master-0 kubenswrapper[26474]: I0223 13:23:10.147929 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-6v58v" podStartSLOduration=1.824576507 podStartE2EDuration="8.14790762s" podCreationTimestamp="2026-02-23 13:23:02 +0000 UTC" firstStartedPulling="2026-02-23 13:23:02.863795123 +0000 UTC m=+504.710302810" lastFinishedPulling="2026-02-23 13:23:09.187126246 +0000 UTC m=+511.033633923" observedRunningTime="2026-02-23 13:23:10.140542543 +0000 UTC m=+511.987050240" watchObservedRunningTime="2026-02-23 13:23:10.14790762 +0000 UTC m=+511.994415297" Feb 23 13:23:10.168396 master-0 kubenswrapper[26474]: I0223 13:23:10.168009 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" podStartSLOduration=2.4332848240000002 podStartE2EDuration="6.167990066s" podCreationTimestamp="2026-02-23 13:23:04 +0000 UTC" firstStartedPulling="2026-02-23 13:23:05.461296508 +0000 UTC m=+507.307804175" lastFinishedPulling="2026-02-23 13:23:09.19600173 +0000 UTC m=+511.042509417" observedRunningTime="2026-02-23 13:23:10.164095342 +0000 UTC m=+512.010603009" watchObservedRunningTime="2026-02-23 13:23:10.167990066 +0000 UTC m=+512.014497743" Feb 23 13:23:15.224365 master-0 kubenswrapper[26474]: I0223 13:23:15.223852 26474 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-kq9nr" Feb 23 13:23:16.668182 master-0 kubenswrapper[26474]: I0223 13:23:16.668050 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk"] Feb 23 13:23:16.669257 master-0 kubenswrapper[26474]: I0223 13:23:16.669228 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk" Feb 23 13:23:16.671707 master-0 kubenswrapper[26474]: I0223 13:23:16.671647 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 23 13:23:16.672145 master-0 kubenswrapper[26474]: I0223 13:23:16.672116 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 23 13:23:16.682123 master-0 kubenswrapper[26474]: I0223 13:23:16.680735 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk"] Feb 23 13:23:16.777166 master-0 kubenswrapper[26474]: I0223 13:23:16.775817 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2fgp\" (UniqueName: \"kubernetes.io/projected/b5575a74-3386-4e23-9773-00bcf98e0bac-kube-api-access-k2fgp\") pod \"obo-prometheus-operator-68bc856cb9-fmglk\" (UID: \"b5575a74-3386-4e23-9773-00bcf98e0bac\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk" Feb 23 13:23:16.808795 master-0 kubenswrapper[26474]: I0223 13:23:16.808659 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf"] Feb 23 13:23:16.819299 master-0 kubenswrapper[26474]: I0223 13:23:16.819128 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" Feb 23 13:23:16.824066 master-0 kubenswrapper[26474]: I0223 13:23:16.824000 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 23 13:23:16.825738 master-0 kubenswrapper[26474]: I0223 13:23:16.825313 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7"] Feb 23 13:23:16.827207 master-0 kubenswrapper[26474]: I0223 13:23:16.827166 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" Feb 23 13:23:16.831932 master-0 kubenswrapper[26474]: I0223 13:23:16.831832 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf"] Feb 23 13:23:16.870314 master-0 kubenswrapper[26474]: I0223 13:23:16.870252 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7"] Feb 23 13:23:16.877636 master-0 kubenswrapper[26474]: I0223 13:23:16.877576 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2fgp\" (UniqueName: \"kubernetes.io/projected/b5575a74-3386-4e23-9773-00bcf98e0bac-kube-api-access-k2fgp\") pod \"obo-prometheus-operator-68bc856cb9-fmglk\" (UID: \"b5575a74-3386-4e23-9773-00bcf98e0bac\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk" Feb 23 13:23:16.897641 master-0 kubenswrapper[26474]: I0223 13:23:16.897577 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2fgp\" (UniqueName: \"kubernetes.io/projected/b5575a74-3386-4e23-9773-00bcf98e0bac-kube-api-access-k2fgp\") pod \"obo-prometheus-operator-68bc856cb9-fmglk\" 
(UID: \"b5575a74-3386-4e23-9773-00bcf98e0bac\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk" Feb 23 13:23:16.944864 master-0 kubenswrapper[26474]: I0223 13:23:16.941400 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-smk8t"] Feb 23 13:23:16.944864 master-0 kubenswrapper[26474]: I0223 13:23:16.942469 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:16.949693 master-0 kubenswrapper[26474]: I0223 13:23:16.949650 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 23 13:23:16.952305 master-0 kubenswrapper[26474]: I0223 13:23:16.952249 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-smk8t"] Feb 23 13:23:16.978670 master-0 kubenswrapper[26474]: I0223 13:23:16.978586 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/113dd403-f463-46cb-a047-78ef18e80695-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-xttq7\" (UID: \"113dd403-f463-46cb-a047-78ef18e80695\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" Feb 23 13:23:16.978916 master-0 kubenswrapper[26474]: I0223 13:23:16.978701 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6b66e0a-2452-40f6-8e9f-348deb4ea44d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf\" (UID: \"a6b66e0a-2452-40f6-8e9f-348deb4ea44d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" Feb 23 13:23:16.979010 master-0 kubenswrapper[26474]: I0223 13:23:16.978961 26474 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6b66e0a-2452-40f6-8e9f-348deb4ea44d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf\" (UID: \"a6b66e0a-2452-40f6-8e9f-348deb4ea44d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" Feb 23 13:23:16.979080 master-0 kubenswrapper[26474]: I0223 13:23:16.979059 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/113dd403-f463-46cb-a047-78ef18e80695-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-xttq7\" (UID: \"113dd403-f463-46cb-a047-78ef18e80695\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" Feb 23 13:23:16.995778 master-0 kubenswrapper[26474]: I0223 13:23:16.995682 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk" Feb 23 13:23:17.083737 master-0 kubenswrapper[26474]: I0223 13:23:17.080818 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kd69\" (UniqueName: \"kubernetes.io/projected/2a503fce-4dda-476b-9e4f-385dc3719110-kube-api-access-4kd69\") pod \"observability-operator-59bdc8b94-smk8t\" (UID: \"2a503fce-4dda-476b-9e4f-385dc3719110\") " pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:17.083737 master-0 kubenswrapper[26474]: I0223 13:23:17.080939 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6b66e0a-2452-40f6-8e9f-348deb4ea44d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf\" (UID: \"a6b66e0a-2452-40f6-8e9f-348deb4ea44d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" Feb 23 13:23:17.083737 master-0 kubenswrapper[26474]: I0223 13:23:17.080996 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/113dd403-f463-46cb-a047-78ef18e80695-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-xttq7\" (UID: \"113dd403-f463-46cb-a047-78ef18e80695\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" Feb 23 13:23:17.083737 master-0 kubenswrapper[26474]: I0223 13:23:17.081048 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/113dd403-f463-46cb-a047-78ef18e80695-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-xttq7\" (UID: \"113dd403-f463-46cb-a047-78ef18e80695\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" Feb 23 13:23:17.083737 master-0 
kubenswrapper[26474]: I0223 13:23:17.081080 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a503fce-4dda-476b-9e4f-385dc3719110-observability-operator-tls\") pod \"observability-operator-59bdc8b94-smk8t\" (UID: \"2a503fce-4dda-476b-9e4f-385dc3719110\") " pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:17.083737 master-0 kubenswrapper[26474]: I0223 13:23:17.081126 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6b66e0a-2452-40f6-8e9f-348deb4ea44d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf\" (UID: \"a6b66e0a-2452-40f6-8e9f-348deb4ea44d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" Feb 23 13:23:17.085916 master-0 kubenswrapper[26474]: I0223 13:23:17.085859 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/113dd403-f463-46cb-a047-78ef18e80695-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-xttq7\" (UID: \"113dd403-f463-46cb-a047-78ef18e80695\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" Feb 23 13:23:17.086393 master-0 kubenswrapper[26474]: I0223 13:23:17.086312 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6b66e0a-2452-40f6-8e9f-348deb4ea44d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf\" (UID: \"a6b66e0a-2452-40f6-8e9f-348deb4ea44d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" Feb 23 13:23:17.087957 master-0 kubenswrapper[26474]: I0223 13:23:17.087915 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/113dd403-f463-46cb-a047-78ef18e80695-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-xttq7\" (UID: \"113dd403-f463-46cb-a047-78ef18e80695\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" Feb 23 13:23:17.093759 master-0 kubenswrapper[26474]: I0223 13:23:17.093706 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6b66e0a-2452-40f6-8e9f-348deb4ea44d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf\" (UID: \"a6b66e0a-2452-40f6-8e9f-348deb4ea44d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" Feb 23 13:23:17.130777 master-0 kubenswrapper[26474]: I0223 13:23:17.130677 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-66zv7"] Feb 23 13:23:17.132369 master-0 kubenswrapper[26474]: I0223 13:23:17.132319 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:17.157628 master-0 kubenswrapper[26474]: I0223 13:23:17.156118 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-66zv7"] Feb 23 13:23:17.167384 master-0 kubenswrapper[26474]: I0223 13:23:17.164721 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" Feb 23 13:23:17.183129 master-0 kubenswrapper[26474]: I0223 13:23:17.183050 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kd69\" (UniqueName: \"kubernetes.io/projected/2a503fce-4dda-476b-9e4f-385dc3719110-kube-api-access-4kd69\") pod \"observability-operator-59bdc8b94-smk8t\" (UID: \"2a503fce-4dda-476b-9e4f-385dc3719110\") " pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:17.183414 master-0 kubenswrapper[26474]: I0223 13:23:17.183167 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a503fce-4dda-476b-9e4f-385dc3719110-observability-operator-tls\") pod \"observability-operator-59bdc8b94-smk8t\" (UID: \"2a503fce-4dda-476b-9e4f-385dc3719110\") " pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:17.187035 master-0 kubenswrapper[26474]: I0223 13:23:17.186991 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a503fce-4dda-476b-9e4f-385dc3719110-observability-operator-tls\") pod \"observability-operator-59bdc8b94-smk8t\" (UID: \"2a503fce-4dda-476b-9e4f-385dc3719110\") " pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:17.202262 master-0 kubenswrapper[26474]: I0223 13:23:17.193023 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" Feb 23 13:23:17.215355 master-0 kubenswrapper[26474]: I0223 13:23:17.215282 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kd69\" (UniqueName: \"kubernetes.io/projected/2a503fce-4dda-476b-9e4f-385dc3719110-kube-api-access-4kd69\") pod \"observability-operator-59bdc8b94-smk8t\" (UID: \"2a503fce-4dda-476b-9e4f-385dc3719110\") " pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:17.296780 master-0 kubenswrapper[26474]: I0223 13:23:17.294913 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:17.296780 master-0 kubenswrapper[26474]: I0223 13:23:17.295674 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd538084-e726-46dc-8a79-6f62d46784df-openshift-service-ca\") pod \"perses-operator-5bf474d74f-66zv7\" (UID: \"dd538084-e726-46dc-8a79-6f62d46784df\") " pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:17.296780 master-0 kubenswrapper[26474]: I0223 13:23:17.295813 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7f59\" (UniqueName: \"kubernetes.io/projected/dd538084-e726-46dc-8a79-6f62d46784df-kube-api-access-k7f59\") pod \"perses-operator-5bf474d74f-66zv7\" (UID: \"dd538084-e726-46dc-8a79-6f62d46784df\") " pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:17.305093 master-0 kubenswrapper[26474]: I0223 13:23:17.301314 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" 
event={"ID":"b482fa9f-094a-41d1-8259-fc9d625f0b65","Type":"ContainerStarted","Data":"df1bc6c2fcc11e2172b48b315e22a1717192ee0d772a85f6c8e4d0a8abe47b11"} Feb 23 13:23:17.305093 master-0 kubenswrapper[26474]: I0223 13:23:17.301617 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" Feb 23 13:23:17.305093 master-0 kubenswrapper[26474]: I0223 13:23:17.303638 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" event={"ID":"7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da","Type":"ContainerStarted","Data":"01dda61cef76d56a579b7d6864b0110ac4983542d84993f16ba5ec25ebb24042"} Feb 23 13:23:17.305093 master-0 kubenswrapper[26474]: I0223 13:23:17.304459 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" Feb 23 13:23:17.384232 master-0 kubenswrapper[26474]: I0223 13:23:17.383108 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t" podStartSLOduration=3.827943646 podStartE2EDuration="10.383080087s" podCreationTimestamp="2026-02-23 13:23:07 +0000 UTC" firstStartedPulling="2026-02-23 13:23:09.814718224 +0000 UTC m=+511.661225901" lastFinishedPulling="2026-02-23 13:23:16.369854665 +0000 UTC m=+518.216362342" observedRunningTime="2026-02-23 13:23:17.366746732 +0000 UTC m=+519.213254429" watchObservedRunningTime="2026-02-23 13:23:17.383080087 +0000 UTC m=+519.229587764" Feb 23 13:23:17.385993 master-0 kubenswrapper[26474]: I0223 13:23:17.385511 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n" podStartSLOduration=3.788048502 podStartE2EDuration="10.385504505s" podCreationTimestamp="2026-02-23 13:23:07 +0000 UTC" firstStartedPulling="2026-02-23 13:23:09.74996097 
+0000 UTC m=+511.596468637" lastFinishedPulling="2026-02-23 13:23:16.347416963 +0000 UTC m=+518.193924640" observedRunningTime="2026-02-23 13:23:17.341730338 +0000 UTC m=+519.188238025" watchObservedRunningTime="2026-02-23 13:23:17.385504505 +0000 UTC m=+519.232012182" Feb 23 13:23:17.397966 master-0 kubenswrapper[26474]: I0223 13:23:17.397418 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd538084-e726-46dc-8a79-6f62d46784df-openshift-service-ca\") pod \"perses-operator-5bf474d74f-66zv7\" (UID: \"dd538084-e726-46dc-8a79-6f62d46784df\") " pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:17.397966 master-0 kubenswrapper[26474]: I0223 13:23:17.397538 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7f59\" (UniqueName: \"kubernetes.io/projected/dd538084-e726-46dc-8a79-6f62d46784df-kube-api-access-k7f59\") pod \"perses-operator-5bf474d74f-66zv7\" (UID: \"dd538084-e726-46dc-8a79-6f62d46784df\") " pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:17.404267 master-0 kubenswrapper[26474]: I0223 13:23:17.400408 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd538084-e726-46dc-8a79-6f62d46784df-openshift-service-ca\") pod \"perses-operator-5bf474d74f-66zv7\" (UID: \"dd538084-e726-46dc-8a79-6f62d46784df\") " pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:17.410767 master-0 kubenswrapper[26474]: I0223 13:23:17.409490 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-cjlpr"] Feb 23 13:23:17.412982 master-0 kubenswrapper[26474]: I0223 13:23:17.411199 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-cjlpr" Feb 23 13:23:17.424505 master-0 kubenswrapper[26474]: I0223 13:23:17.424449 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7f59\" (UniqueName: \"kubernetes.io/projected/dd538084-e726-46dc-8a79-6f62d46784df-kube-api-access-k7f59\") pod \"perses-operator-5bf474d74f-66zv7\" (UID: \"dd538084-e726-46dc-8a79-6f62d46784df\") " pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:17.426734 master-0 kubenswrapper[26474]: I0223 13:23:17.426677 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-cjlpr"] Feb 23 13:23:17.496263 master-0 kubenswrapper[26474]: I0223 13:23:17.496189 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:17.500804 master-0 kubenswrapper[26474]: I0223 13:23:17.500693 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrksm\" (UniqueName: \"kubernetes.io/projected/85a9c5a9-c76a-4d3c-b839-71c812f5abe2-kube-api-access-wrksm\") pod \"cert-manager-545d4d4674-cjlpr\" (UID: \"85a9c5a9-c76a-4d3c-b839-71c812f5abe2\") " pod="cert-manager/cert-manager-545d4d4674-cjlpr" Feb 23 13:23:17.500887 master-0 kubenswrapper[26474]: I0223 13:23:17.500814 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85a9c5a9-c76a-4d3c-b839-71c812f5abe2-bound-sa-token\") pod \"cert-manager-545d4d4674-cjlpr\" (UID: \"85a9c5a9-c76a-4d3c-b839-71c812f5abe2\") " pod="cert-manager/cert-manager-545d4d4674-cjlpr" Feb 23 13:23:17.546315 master-0 kubenswrapper[26474]: I0223 13:23:17.545365 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk"] Feb 23 13:23:17.561719 master-0 
kubenswrapper[26474]: W0223 13:23:17.561658 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5575a74_3386_4e23_9773_00bcf98e0bac.slice/crio-af01238c9a4864f6726e7e4b1170152162f5d4fa6c0c311379e18adc8d143837 WatchSource:0}: Error finding container af01238c9a4864f6726e7e4b1170152162f5d4fa6c0c311379e18adc8d143837: Status 404 returned error can't find the container with id af01238c9a4864f6726e7e4b1170152162f5d4fa6c0c311379e18adc8d143837 Feb 23 13:23:17.604049 master-0 kubenswrapper[26474]: I0223 13:23:17.602628 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85a9c5a9-c76a-4d3c-b839-71c812f5abe2-bound-sa-token\") pod \"cert-manager-545d4d4674-cjlpr\" (UID: \"85a9c5a9-c76a-4d3c-b839-71c812f5abe2\") " pod="cert-manager/cert-manager-545d4d4674-cjlpr" Feb 23 13:23:17.604049 master-0 kubenswrapper[26474]: I0223 13:23:17.602801 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrksm\" (UniqueName: \"kubernetes.io/projected/85a9c5a9-c76a-4d3c-b839-71c812f5abe2-kube-api-access-wrksm\") pod \"cert-manager-545d4d4674-cjlpr\" (UID: \"85a9c5a9-c76a-4d3c-b839-71c812f5abe2\") " pod="cert-manager/cert-manager-545d4d4674-cjlpr" Feb 23 13:23:17.623203 master-0 kubenswrapper[26474]: I0223 13:23:17.623012 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85a9c5a9-c76a-4d3c-b839-71c812f5abe2-bound-sa-token\") pod \"cert-manager-545d4d4674-cjlpr\" (UID: \"85a9c5a9-c76a-4d3c-b839-71c812f5abe2\") " pod="cert-manager/cert-manager-545d4d4674-cjlpr" Feb 23 13:23:17.625519 master-0 kubenswrapper[26474]: I0223 13:23:17.624442 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrksm\" (UniqueName: 
\"kubernetes.io/projected/85a9c5a9-c76a-4d3c-b839-71c812f5abe2-kube-api-access-wrksm\") pod \"cert-manager-545d4d4674-cjlpr\" (UID: \"85a9c5a9-c76a-4d3c-b839-71c812f5abe2\") " pod="cert-manager/cert-manager-545d4d4674-cjlpr" Feb 23 13:23:17.665459 master-0 kubenswrapper[26474]: I0223 13:23:17.665374 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf"] Feb 23 13:23:17.789217 master-0 kubenswrapper[26474]: I0223 13:23:17.789155 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-smk8t"] Feb 23 13:23:17.824624 master-0 kubenswrapper[26474]: I0223 13:23:17.824570 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-cjlpr" Feb 23 13:23:17.889328 master-0 kubenswrapper[26474]: I0223 13:23:17.889229 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7"] Feb 23 13:23:18.011625 master-0 kubenswrapper[26474]: I0223 13:23:18.011530 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-66zv7"] Feb 23 13:23:18.312359 master-0 kubenswrapper[26474]: I0223 13:23:18.309716 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-cjlpr"] Feb 23 13:23:18.312359 master-0 kubenswrapper[26474]: I0223 13:23:18.312038 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk" event={"ID":"b5575a74-3386-4e23-9773-00bcf98e0bac","Type":"ContainerStarted","Data":"af01238c9a4864f6726e7e4b1170152162f5d4fa6c0c311379e18adc8d143837"} Feb 23 13:23:18.320444 master-0 kubenswrapper[26474]: I0223 13:23:18.317298 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-66zv7" 
event={"ID":"dd538084-e726-46dc-8a79-6f62d46784df","Type":"ContainerStarted","Data":"66e0336b8640261bc654d1319e906fc2bc15cb10637d9fffd97da84cc5d92524"} Feb 23 13:23:18.322847 master-0 kubenswrapper[26474]: I0223 13:23:18.322772 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" event={"ID":"113dd403-f463-46cb-a047-78ef18e80695","Type":"ContainerStarted","Data":"7ff3e3b89b29565347352871c7bbef1f54b66229c31e3e9f8897cf8bc9905710"} Feb 23 13:23:18.325167 master-0 kubenswrapper[26474]: I0223 13:23:18.324937 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" event={"ID":"a6b66e0a-2452-40f6-8e9f-348deb4ea44d","Type":"ContainerStarted","Data":"dc73b1e8f7879979d0891e556f753de41e4daf30649343d8cd8ecd4342d6f034"} Feb 23 13:23:18.330135 master-0 kubenswrapper[26474]: I0223 13:23:18.329513 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-smk8t" event={"ID":"2a503fce-4dda-476b-9e4f-385dc3719110","Type":"ContainerStarted","Data":"520645e33e18748b41b37e506379583fa5b9925237370d1d0970105e5b5ced7f"} Feb 23 13:23:19.342322 master-0 kubenswrapper[26474]: I0223 13:23:19.340541 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-cjlpr" event={"ID":"85a9c5a9-c76a-4d3c-b839-71c812f5abe2","Type":"ContainerStarted","Data":"f1b40411adf9591ef9d02acf82ecfe0ae4fabf0d52c1422c85d5faa4e6a90eb8"} Feb 23 13:23:19.342322 master-0 kubenswrapper[26474]: I0223 13:23:19.340650 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-cjlpr" event={"ID":"85a9c5a9-c76a-4d3c-b839-71c812f5abe2","Type":"ContainerStarted","Data":"f24a0c5bf37eba64046fd82fc318984fe8bfc01e9d166cf63c34da8b51dcc588"} Feb 23 13:23:19.480472 master-0 kubenswrapper[26474]: I0223 13:23:19.480325 26474 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-cjlpr" podStartSLOduration=2.4802971400000002 podStartE2EDuration="2.48029714s" podCreationTimestamp="2026-02-23 13:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:23:19.469300354 +0000 UTC m=+521.315808051" watchObservedRunningTime="2026-02-23 13:23:19.48029714 +0000 UTC m=+521.326804837" Feb 23 13:23:26.404023 master-0 kubenswrapper[26474]: I0223 13:23:26.403972 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-smk8t" Feb 23 13:23:26.404536 master-0 kubenswrapper[26474]: I0223 13:23:26.404025 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-smk8t" event={"ID":"2a503fce-4dda-476b-9e4f-385dc3719110","Type":"ContainerStarted","Data":"204f117cf1b1ba48717216057b821f1a3efa38a2745d182609d876377475846e"} Feb 23 13:23:26.408111 master-0 kubenswrapper[26474]: I0223 13:23:26.408057 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk" event={"ID":"b5575a74-3386-4e23-9773-00bcf98e0bac","Type":"ContainerStarted","Data":"ce9407823b0e0d1ca6a989023393e23df35e38dc4405a4dd14f4d85ea4a1061d"} Feb 23 13:23:26.409672 master-0 kubenswrapper[26474]: I0223 13:23:26.409622 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-66zv7" event={"ID":"dd538084-e726-46dc-8a79-6f62d46784df","Type":"ContainerStarted","Data":"c54c242d64467af55f8115a4bab2538c735b2d6ca3c5d0849c6db18bb23de290"} Feb 23 13:23:26.409844 master-0 kubenswrapper[26474]: I0223 13:23:26.409814 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-66zv7" Feb 23 13:23:26.411390 master-0 
kubenswrapper[26474]: I0223 13:23:26.411350 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" event={"ID":"113dd403-f463-46cb-a047-78ef18e80695","Type":"ContainerStarted","Data":"793ba62aa7bc40b02cdc44ea6099d5e8b0652a5e0e6cab587efddc730ad54a67"}
Feb 23 13:23:26.413080 master-0 kubenswrapper[26474]: I0223 13:23:26.413046 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" event={"ID":"a6b66e0a-2452-40f6-8e9f-348deb4ea44d","Type":"ContainerStarted","Data":"de9019a1ef284a4692cd17a3155b840a9e4ac00887fd2ec1494d81f3aa1e8ce3"}
Feb 23 13:23:26.438263 master-0 kubenswrapper[26474]: I0223 13:23:26.438163 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-smk8t" podStartSLOduration=2.642511986 podStartE2EDuration="10.438140058s" podCreationTimestamp="2026-02-23 13:23:16 +0000 UTC" firstStartedPulling="2026-02-23 13:23:17.803068141 +0000 UTC m=+519.649575818" lastFinishedPulling="2026-02-23 13:23:25.598696213 +0000 UTC m=+527.445203890" observedRunningTime="2026-02-23 13:23:26.434031798 +0000 UTC m=+528.280539475" watchObservedRunningTime="2026-02-23 13:23:26.438140058 +0000 UTC m=+528.284647735"
Feb 23 13:23:26.496361 master-0 kubenswrapper[26474]: I0223 13:23:26.493850 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-smk8t"
Feb 23 13:23:26.520358 master-0 kubenswrapper[26474]: I0223 13:23:26.518067 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fmglk" podStartSLOduration=2.493157268 podStartE2EDuration="10.518041587s" podCreationTimestamp="2026-02-23 13:23:16 +0000 UTC" firstStartedPulling="2026-02-23 13:23:17.566477297 +0000 UTC m=+519.412984964" lastFinishedPulling="2026-02-23 13:23:25.591361606 +0000 UTC m=+527.437869283" observedRunningTime="2026-02-23 13:23:26.473046201 +0000 UTC m=+528.319553868" watchObservedRunningTime="2026-02-23 13:23:26.518041587 +0000 UTC m=+528.364549264"
Feb 23 13:23:26.526317 master-0 kubenswrapper[26474]: I0223 13:23:26.525046 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-j4rcf" podStartSLOduration=2.643430168 podStartE2EDuration="10.525030396s" podCreationTimestamp="2026-02-23 13:23:16 +0000 UTC" firstStartedPulling="2026-02-23 13:23:17.671865312 +0000 UTC m=+519.518372989" lastFinishedPulling="2026-02-23 13:23:25.55346552 +0000 UTC m=+527.399973217" observedRunningTime="2026-02-23 13:23:26.522970787 +0000 UTC m=+528.369478464" watchObservedRunningTime="2026-02-23 13:23:26.525030396 +0000 UTC m=+528.371538073"
Feb 23 13:23:26.590103 master-0 kubenswrapper[26474]: I0223 13:23:26.589937 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-748c49b44b-xttq7" podStartSLOduration=2.945399961 podStartE2EDuration="10.589920263s" podCreationTimestamp="2026-02-23 13:23:16 +0000 UTC" firstStartedPulling="2026-02-23 13:23:17.903779283 +0000 UTC m=+519.750287000" lastFinishedPulling="2026-02-23 13:23:25.548299625 +0000 UTC m=+527.394807302" observedRunningTime="2026-02-23 13:23:26.574742927 +0000 UTC m=+528.421250614" watchObservedRunningTime="2026-02-23 13:23:26.589920263 +0000 UTC m=+528.436427960"
Feb 23 13:23:26.622076 master-0 kubenswrapper[26474]: I0223 13:23:26.621997 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-66zv7" podStartSLOduration=2.037016672 podStartE2EDuration="9.621979217s" podCreationTimestamp="2026-02-23 13:23:17 +0000 UTC" firstStartedPulling="2026-02-23 13:23:18.005779456 +0000 UTC m=+519.852287133" lastFinishedPulling="2026-02-23 13:23:25.590742001 +0000 UTC m=+527.437249678" observedRunningTime="2026-02-23 13:23:26.617222533 +0000 UTC m=+528.463730210" watchObservedRunningTime="2026-02-23 13:23:26.621979217 +0000 UTC m=+528.468486894"
Feb 23 13:23:28.284145 master-0 kubenswrapper[26474]: I0223 13:23:28.284076 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8bb78c4ff-kc79t"
Feb 23 13:23:37.500237 master-0 kubenswrapper[26474]: I0223 13:23:37.500166 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-66zv7"
Feb 23 13:23:41.025935 master-0 kubenswrapper[26474]: I0223 13:23:41.025828 26474 scope.go:117] "RemoveContainer" containerID="2e3d12f7546ed9dc911e6b0badc88fa73138850feb384e2188c5098c9007f1a4"
Feb 23 13:23:47.804366 master-0 kubenswrapper[26474]: I0223 13:23:47.804266 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-699c7b98cc-68v7n"
Feb 23 13:23:55.445367 master-0 kubenswrapper[26474]: I0223 13:23:55.440857 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"]
Feb 23 13:23:55.445367 master-0 kubenswrapper[26474]: I0223 13:23:55.442112 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:23:55.451371 master-0 kubenswrapper[26474]: I0223 13:23:55.447093 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 23 13:23:55.455362 master-0 kubenswrapper[26474]: I0223 13:23:55.454269 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-8mmw5"]
Feb 23 13:23:55.459380 master-0 kubenswrapper[26474]: I0223 13:23:55.458241 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.463365 master-0 kubenswrapper[26474]: I0223 13:23:55.460677 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"]
Feb 23 13:23:55.474407 master-0 kubenswrapper[26474]: I0223 13:23:55.473731 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 23 13:23:55.479373 master-0 kubenswrapper[26474]: I0223 13:23:55.474965 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 23 13:23:55.541260 master-0 kubenswrapper[26474]: I0223 13:23:55.541195 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9tms9"]
Feb 23 13:23:55.543535 master-0 kubenswrapper[26474]: I0223 13:23:55.543503 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.546472 master-0 kubenswrapper[26474]: I0223 13:23:55.546423 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 23 13:23:55.547523 master-0 kubenswrapper[26474]: I0223 13:23:55.547500 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 23 13:23:55.547599 master-0 kubenswrapper[26474]: I0223 13:23:55.547551 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 23 13:23:55.559102 master-0 kubenswrapper[26474]: I0223 13:23:55.559054 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-6bhkv"]
Feb 23 13:23:55.561800 master-0 kubenswrapper[26474]: I0223 13:23:55.561772 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.568066 master-0 kubenswrapper[26474]: I0223 13:23:55.567983 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 23 13:23:55.575689 master-0 kubenswrapper[26474]: I0223 13:23:55.575637 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-6bhkv"]
Feb 23 13:23:55.644056 master-0 kubenswrapper[26474]: I0223 13:23:55.643890 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmv8\" (UniqueName: \"kubernetes.io/projected/cdefb8f6-8066-4ce4-b76c-5b19831081c9-kube-api-access-xdmv8\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.644279 master-0 kubenswrapper[26474]: I0223 13:23:55.644063 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-frr-sockets\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.644279 master-0 kubenswrapper[26474]: I0223 13:23:55.644099 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-metrics-certs\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.644279 master-0 kubenswrapper[26474]: I0223 13:23:55.644226 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72xn7\" (UniqueName: \"kubernetes.io/projected/96d9fb56-251d-4ab0-97b6-63645853e820-kube-api-access-72xn7\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.644407 master-0 kubenswrapper[26474]: I0223 13:23:55.644307 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96d9fb56-251d-4ab0-97b6-63645853e820-metrics-certs\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.644407 master-0 kubenswrapper[26474]: I0223 13:23:55.644334 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvjd\" (UniqueName: \"kubernetes.io/projected/7d38b74b-692d-400d-9cb0-bdfe09afc08f-kube-api-access-2zvjd\") pod \"frr-k8s-webhook-server-78b44bf5bb-rmg9d\" (UID: \"7d38b74b-692d-400d-9cb0-bdfe09afc08f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:23:55.644407 master-0 kubenswrapper[26474]: I0223 13:23:55.644373 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xh6\" (UniqueName: \"kubernetes.io/projected/c274d379-166c-4181-9e3b-8a27e4bfcc8e-kube-api-access-79xh6\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.644501 master-0 kubenswrapper[26474]: I0223 13:23:55.644470 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c274d379-166c-4181-9e3b-8a27e4bfcc8e-cert\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.644589 master-0 kubenswrapper[26474]: I0223 13:23:55.644565 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.644644 master-0 kubenswrapper[26474]: I0223 13:23:55.644622 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-frr-conf\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.644715 master-0 kubenswrapper[26474]: I0223 13:23:55.644687 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d38b74b-692d-400d-9cb0-bdfe09afc08f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-rmg9d\" (UID: \"7d38b74b-692d-400d-9cb0-bdfe09afc08f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:23:55.644755 master-0 kubenswrapper[26474]: I0223 13:23:55.644739 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cdefb8f6-8066-4ce4-b76c-5b19831081c9-metallb-excludel2\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.644809 master-0 kubenswrapper[26474]: I0223 13:23:55.644785 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-metrics\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.645118 master-0 kubenswrapper[26474]: I0223 13:23:55.645041 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96d9fb56-251d-4ab0-97b6-63645853e820-frr-startup\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.645209 master-0 kubenswrapper[26474]: I0223 13:23:55.645179 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-reloader\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.645275 master-0 kubenswrapper[26474]: I0223 13:23:55.645247 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c274d379-166c-4181-9e3b-8a27e4bfcc8e-metrics-certs\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.747666 master-0 kubenswrapper[26474]: I0223 13:23:55.747517 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c274d379-166c-4181-9e3b-8a27e4bfcc8e-metrics-certs\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.747666 master-0 kubenswrapper[26474]: I0223 13:23:55.747622 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmv8\" (UniqueName: \"kubernetes.io/projected/cdefb8f6-8066-4ce4-b76c-5b19831081c9-kube-api-access-xdmv8\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.747898 master-0 kubenswrapper[26474]: I0223 13:23:55.747813 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-frr-sockets\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.747898 master-0 kubenswrapper[26474]: I0223 13:23:55.747882 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-metrics-certs\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.748353 master-0 kubenswrapper[26474]: I0223 13:23:55.748306 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72xn7\" (UniqueName: \"kubernetes.io/projected/96d9fb56-251d-4ab0-97b6-63645853e820-kube-api-access-72xn7\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.748485 master-0 kubenswrapper[26474]: I0223 13:23:55.748371 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-frr-sockets\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.748485 master-0 kubenswrapper[26474]: I0223 13:23:55.748415 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96d9fb56-251d-4ab0-97b6-63645853e820-metrics-certs\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.748485 master-0 kubenswrapper[26474]: I0223 13:23:55.748443 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zvjd\" (UniqueName: \"kubernetes.io/projected/7d38b74b-692d-400d-9cb0-bdfe09afc08f-kube-api-access-2zvjd\") pod \"frr-k8s-webhook-server-78b44bf5bb-rmg9d\" (UID: \"7d38b74b-692d-400d-9cb0-bdfe09afc08f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:23:55.748609 master-0 kubenswrapper[26474]: I0223 13:23:55.748582 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79xh6\" (UniqueName: \"kubernetes.io/projected/c274d379-166c-4181-9e3b-8a27e4bfcc8e-kube-api-access-79xh6\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.748642 master-0 kubenswrapper[26474]: I0223 13:23:55.748625 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c274d379-166c-4181-9e3b-8a27e4bfcc8e-cert\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.748675 master-0 kubenswrapper[26474]: I0223 13:23:55.748648 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.748675 master-0 kubenswrapper[26474]: I0223 13:23:55.748663 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-frr-conf\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.748777 master-0 kubenswrapper[26474]: I0223 13:23:55.748753 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d38b74b-692d-400d-9cb0-bdfe09afc08f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-rmg9d\" (UID: \"7d38b74b-692d-400d-9cb0-bdfe09afc08f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:23:55.748813 master-0 kubenswrapper[26474]: I0223 13:23:55.748778 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cdefb8f6-8066-4ce4-b76c-5b19831081c9-metallb-excludel2\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.749093 master-0 kubenswrapper[26474]: I0223 13:23:55.749053 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-frr-conf\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.749281 master-0 kubenswrapper[26474]: E0223 13:23:55.749253 26474 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 23 13:23:55.749317 master-0 kubenswrapper[26474]: E0223 13:23:55.749303 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist podName:cdefb8f6-8066-4ce4-b76c-5b19831081c9 nodeName:}" failed. No retries permitted until 2026-02-23 13:23:56.24928987 +0000 UTC m=+558.095797547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist") pod "speaker-9tms9" (UID: "cdefb8f6-8066-4ce4-b76c-5b19831081c9") : secret "metallb-memberlist" not found
Feb 23 13:23:55.749432 master-0 kubenswrapper[26474]: I0223 13:23:55.749253 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-metrics\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.749533 master-0 kubenswrapper[26474]: I0223 13:23:55.749499 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cdefb8f6-8066-4ce4-b76c-5b19831081c9-metallb-excludel2\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.749584 master-0 kubenswrapper[26474]: I0223 13:23:55.749540 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96d9fb56-251d-4ab0-97b6-63645853e820-frr-startup\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.749617 master-0 kubenswrapper[26474]: I0223 13:23:55.749599 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-reloader\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.750069 master-0 kubenswrapper[26474]: I0223 13:23:55.750035 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-reloader\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.750323 master-0 kubenswrapper[26474]: I0223 13:23:55.750295 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96d9fb56-251d-4ab0-97b6-63645853e820-frr-startup\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.751417 master-0 kubenswrapper[26474]: I0223 13:23:55.751330 26474 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 23 13:23:55.751907 master-0 kubenswrapper[26474]: I0223 13:23:55.751874 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96d9fb56-251d-4ab0-97b6-63645853e820-metrics\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.752377 master-0 kubenswrapper[26474]: I0223 13:23:55.752319 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96d9fb56-251d-4ab0-97b6-63645853e820-metrics-certs\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.752680 master-0 kubenswrapper[26474]: I0223 13:23:55.752634 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d38b74b-692d-400d-9cb0-bdfe09afc08f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-rmg9d\" (UID: \"7d38b74b-692d-400d-9cb0-bdfe09afc08f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:23:55.752983 master-0 kubenswrapper[26474]: I0223 13:23:55.752924 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c274d379-166c-4181-9e3b-8a27e4bfcc8e-metrics-certs\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.764274 master-0 kubenswrapper[26474]: I0223 13:23:55.764227 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-metrics-certs\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.767864 master-0 kubenswrapper[26474]: I0223 13:23:55.767832 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79xh6\" (UniqueName: \"kubernetes.io/projected/c274d379-166c-4181-9e3b-8a27e4bfcc8e-kube-api-access-79xh6\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.767943 master-0 kubenswrapper[26474]: I0223 13:23:55.767833 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72xn7\" (UniqueName: \"kubernetes.io/projected/96d9fb56-251d-4ab0-97b6-63645853e820-kube-api-access-72xn7\") pod \"frr-k8s-8mmw5\" (UID: \"96d9fb56-251d-4ab0-97b6-63645853e820\") " pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.768033 master-0 kubenswrapper[26474]: I0223 13:23:55.767984 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c274d379-166c-4181-9e3b-8a27e4bfcc8e-cert\") pod \"controller-69bbfbf88f-6bhkv\" (UID: \"c274d379-166c-4181-9e3b-8a27e4bfcc8e\") " pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:55.769064 master-0 kubenswrapper[26474]: I0223 13:23:55.769025 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmv8\" (UniqueName: \"kubernetes.io/projected/cdefb8f6-8066-4ce4-b76c-5b19831081c9-kube-api-access-xdmv8\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:55.769974 master-0 kubenswrapper[26474]: I0223 13:23:55.769902 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zvjd\" (UniqueName: \"kubernetes.io/projected/7d38b74b-692d-400d-9cb0-bdfe09afc08f-kube-api-access-2zvjd\") pod \"frr-k8s-webhook-server-78b44bf5bb-rmg9d\" (UID: \"7d38b74b-692d-400d-9cb0-bdfe09afc08f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:23:55.794735 master-0 kubenswrapper[26474]: I0223 13:23:55.794656 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:23:55.812170 master-0 kubenswrapper[26474]: I0223 13:23:55.812106 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:23:55.885229 master-0 kubenswrapper[26474]: I0223 13:23:55.885006 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:23:56.202802 master-0 kubenswrapper[26474]: I0223 13:23:56.202736 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"]
Feb 23 13:23:56.264968 master-0 kubenswrapper[26474]: I0223 13:23:56.264833 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:56.264968 master-0 kubenswrapper[26474]: E0223 13:23:56.264954 26474 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 23 13:23:56.265227 master-0 kubenswrapper[26474]: E0223 13:23:56.265009 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist podName:cdefb8f6-8066-4ce4-b76c-5b19831081c9 nodeName:}" failed. No retries permitted until 2026-02-23 13:23:57.264994177 +0000 UTC m=+559.111501844 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist") pod "speaker-9tms9" (UID: "cdefb8f6-8066-4ce4-b76c-5b19831081c9") : secret "metallb-memberlist" not found
Feb 23 13:23:56.363925 master-0 kubenswrapper[26474]: I0223 13:23:56.363874 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-6bhkv"]
Feb 23 13:23:56.366747 master-0 kubenswrapper[26474]: W0223 13:23:56.366685 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc274d379_166c_4181_9e3b_8a27e4bfcc8e.slice/crio-4ac151f27a1f7e11fc8a041e0dade643bc0064beecb440f31bf00f72d8117b2c WatchSource:0}: Error finding container 4ac151f27a1f7e11fc8a041e0dade643bc0064beecb440f31bf00f72d8117b2c: Status 404 returned error can't find the container with id 4ac151f27a1f7e11fc8a041e0dade643bc0064beecb440f31bf00f72d8117b2c
Feb 23 13:23:56.697187 master-0 kubenswrapper[26474]: I0223 13:23:56.697000 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-6bhkv" event={"ID":"c274d379-166c-4181-9e3b-8a27e4bfcc8e","Type":"ContainerStarted","Data":"057beca93f04e191c0785c82097827ee16a48c50f9ffbbbb817721ab83e8c33a"}
Feb 23 13:23:56.697187 master-0 kubenswrapper[26474]: I0223 13:23:56.697089 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-6bhkv" event={"ID":"c274d379-166c-4181-9e3b-8a27e4bfcc8e","Type":"ContainerStarted","Data":"4ac151f27a1f7e11fc8a041e0dade643bc0064beecb440f31bf00f72d8117b2c"}
Feb 23 13:23:56.699669 master-0 kubenswrapper[26474]: I0223 13:23:56.699627 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerStarted","Data":"950d764b00371295f45efa05b394656ab5c961ce33d56038c36d3fa13729a50c"}
Feb 23 13:23:56.701938 master-0 kubenswrapper[26474]: I0223 13:23:56.701869 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d" event={"ID":"7d38b74b-692d-400d-9cb0-bdfe09afc08f","Type":"ContainerStarted","Data":"676a20e946fafe448612adac1eafb8bb0a4c04bcddaf225ecc54319663f0ee33"}
Feb 23 13:23:57.283440 master-0 kubenswrapper[26474]: I0223 13:23:57.283332 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:57.287923 master-0 kubenswrapper[26474]: I0223 13:23:57.287859 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdefb8f6-8066-4ce4-b76c-5b19831081c9-memberlist\") pod \"speaker-9tms9\" (UID: \"cdefb8f6-8066-4ce4-b76c-5b19831081c9\") " pod="metallb-system/speaker-9tms9"
Feb 23 13:23:57.361993 master-0 kubenswrapper[26474]: I0223 13:23:57.361898 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9tms9"
Feb 23 13:23:57.423601 master-0 kubenswrapper[26474]: W0223 13:23:57.423550 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdefb8f6_8066_4ce4_b76c_5b19831081c9.slice/crio-31b69d5716bcba6c10f72a47a3ec9eb66b1ff875fb391b9ef5e41b16f3f75ced WatchSource:0}: Error finding container 31b69d5716bcba6c10f72a47a3ec9eb66b1ff875fb391b9ef5e41b16f3f75ced: Status 404 returned error can't find the container with id 31b69d5716bcba6c10f72a47a3ec9eb66b1ff875fb391b9ef5e41b16f3f75ced
Feb 23 13:23:57.468876 master-0 kubenswrapper[26474]: I0223 13:23:57.468817 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t"]
Feb 23 13:23:57.470666 master-0 kubenswrapper[26474]: I0223 13:23:57.470627 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t"
Feb 23 13:23:57.487096 master-0 kubenswrapper[26474]: I0223 13:23:57.487054 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nts85\" (UniqueName: \"kubernetes.io/projected/b242a27c-8dd3-43ee-af46-4dd04ffd1cce-kube-api-access-nts85\") pod \"nmstate-metrics-58c85c668d-4nr9t\" (UID: \"b242a27c-8dd3-43ee-af46-4dd04ffd1cce\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t"
Feb 23 13:23:57.498850 master-0 kubenswrapper[26474]: I0223 13:23:57.498797 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q"]
Feb 23 13:23:57.500951 master-0 kubenswrapper[26474]: I0223 13:23:57.500924 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q"
Feb 23 13:23:57.503173 master-0 kubenswrapper[26474]: I0223 13:23:57.503135 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 23 13:23:57.512437 master-0 kubenswrapper[26474]: I0223 13:23:57.512112 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t"]
Feb 23 13:23:57.521881 master-0 kubenswrapper[26474]: I0223 13:23:57.521640 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q"]
Feb 23 13:23:57.529814 master-0 kubenswrapper[26474]: I0223 13:23:57.529762 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lsm7x"]
Feb 23 13:23:57.530904 master-0 kubenswrapper[26474]: I0223 13:23:57.530878 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lsm7x"
Feb 23 13:23:57.593184 master-0 kubenswrapper[26474]: I0223 13:23:57.592902 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/897817d0-59c1-45ca-afe1-9d525ad217d4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qf29q\" (UID: \"897817d0-59c1-45ca-afe1-9d525ad217d4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q"
Feb 23 13:23:57.593184 master-0 kubenswrapper[26474]: I0223 13:23:57.593026 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlnvt\" (UniqueName: \"kubernetes.io/projected/897817d0-59c1-45ca-afe1-9d525ad217d4-kube-api-access-zlnvt\") pod \"nmstate-webhook-866bcb46dc-qf29q\" (UID: \"897817d0-59c1-45ca-afe1-9d525ad217d4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q"
Feb 23 13:23:57.599508 master-0 kubenswrapper[26474]: I0223 13:23:57.596123 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-ovs-socket\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x"
Feb 23 13:23:57.599508 master-0 kubenswrapper[26474]: I0223 13:23:57.596209 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-nmstate-lock\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x"
Feb 23 13:23:57.599508 master-0 kubenswrapper[26474]: I0223 13:23:57.596321 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nts85\" (UniqueName: \"kubernetes.io/projected/b242a27c-8dd3-43ee-af46-4dd04ffd1cce-kube-api-access-nts85\") pod \"nmstate-metrics-58c85c668d-4nr9t\" (UID: \"b242a27c-8dd3-43ee-af46-4dd04ffd1cce\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t"
Feb 23 13:23:57.599508 master-0 kubenswrapper[26474]: I0223 13:23:57.596423 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fc86\" (UniqueName: \"kubernetes.io/projected/09806916-edb0-4917-a1c1-73690ff280cd-kube-api-access-9fc86\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x"
Feb 23 13:23:57.599508 master-0 kubenswrapper[26474]: I0223 13:23:57.596469 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-dbus-socket\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x"
Feb 23 13:23:57.629814 master-0 kubenswrapper[26474]: I0223 13:23:57.627564 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nts85\" (UniqueName: \"kubernetes.io/projected/b242a27c-8dd3-43ee-af46-4dd04ffd1cce-kube-api-access-nts85\") pod \"nmstate-metrics-58c85c668d-4nr9t\" (UID: \"b242a27c-8dd3-43ee-af46-4dd04ffd1cce\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t"
Feb 23 13:23:57.675245 master-0 kubenswrapper[26474]: I0223 13:23:57.675181 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8"]
Feb 23 13:23:57.676618 master-0 kubenswrapper[26474]: I0223 13:23:57.676584 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8"
Feb 23 13:23:57.679979 master-0 kubenswrapper[26474]: I0223 13:23:57.679947 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 23 13:23:57.681022 master-0 kubenswrapper[26474]: I0223 13:23:57.680106 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 23 13:23:57.686569 master-0 kubenswrapper[26474]: I0223 13:23:57.686483 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8"]
Feb 23 13:23:57.698657 master-0 kubenswrapper[26474]: I0223 13:23:57.698609 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcc8\" (UniqueName: \"kubernetes.io/projected/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-kube-api-access-6qcc8\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8"
Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.700263 26474
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/897817d0-59c1-45ca-afe1-9d525ad217d4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qf29q\" (UID: \"897817d0-59c1-45ca-afe1-9d525ad217d4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.700304 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlnvt\" (UniqueName: \"kubernetes.io/projected/897817d0-59c1-45ca-afe1-9d525ad217d4-kube-api-access-zlnvt\") pod \"nmstate-webhook-866bcb46dc-qf29q\" (UID: \"897817d0-59c1-45ca-afe1-9d525ad217d4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.700469 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-ovs-socket\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.700527 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-nmstate-lock\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.702320 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-ovs-socket\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 
13:23:57.702409 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-nmstate-lock\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.702608 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.702661 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.702839 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fc86\" (UniqueName: \"kubernetes.io/projected/09806916-edb0-4917-a1c1-73690ff280cd-kube-api-access-9fc86\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.702889 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-dbus-socket\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " 
pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.713670 master-0 kubenswrapper[26474]: I0223 13:23:57.703054 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09806916-edb0-4917-a1c1-73690ff280cd-dbus-socket\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.715128 master-0 kubenswrapper[26474]: I0223 13:23:57.715104 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/897817d0-59c1-45ca-afe1-9d525ad217d4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qf29q\" (UID: \"897817d0-59c1-45ca-afe1-9d525ad217d4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q" Feb 23 13:23:57.721710 master-0 kubenswrapper[26474]: I0223 13:23:57.721668 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlnvt\" (UniqueName: \"kubernetes.io/projected/897817d0-59c1-45ca-afe1-9d525ad217d4-kube-api-access-zlnvt\") pod \"nmstate-webhook-866bcb46dc-qf29q\" (UID: \"897817d0-59c1-45ca-afe1-9d525ad217d4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q" Feb 23 13:23:57.728669 master-0 kubenswrapper[26474]: I0223 13:23:57.728617 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9tms9" event={"ID":"cdefb8f6-8066-4ce4-b76c-5b19831081c9","Type":"ContainerStarted","Data":"31b69d5716bcba6c10f72a47a3ec9eb66b1ff875fb391b9ef5e41b16f3f75ced"} Feb 23 13:23:57.731797 master-0 kubenswrapper[26474]: I0223 13:23:57.731599 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-6bhkv" event={"ID":"c274d379-166c-4181-9e3b-8a27e4bfcc8e","Type":"ContainerStarted","Data":"4038b0e1a6192a51c5021471623d715894dad6e60872708b81cca52888ccc167"} Feb 23 13:23:57.732089 master-0 kubenswrapper[26474]: I0223 13:23:57.732059 26474 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-6bhkv" Feb 23 13:23:57.736186 master-0 kubenswrapper[26474]: I0223 13:23:57.736147 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fc86\" (UniqueName: \"kubernetes.io/projected/09806916-edb0-4917-a1c1-73690ff280cd-kube-api-access-9fc86\") pod \"nmstate-handler-lsm7x\" (UID: \"09806916-edb0-4917-a1c1-73690ff280cd\") " pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.761547 master-0 kubenswrapper[26474]: I0223 13:23:57.761452 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-6bhkv" podStartSLOduration=1.835591593 podStartE2EDuration="2.761427803s" podCreationTimestamp="2026-02-23 13:23:55 +0000 UTC" firstStartedPulling="2026-02-23 13:23:56.484711083 +0000 UTC m=+558.331218760" lastFinishedPulling="2026-02-23 13:23:57.410547293 +0000 UTC m=+559.257054970" observedRunningTime="2026-02-23 13:23:57.755911669 +0000 UTC m=+559.602419357" watchObservedRunningTime="2026-02-23 13:23:57.761427803 +0000 UTC m=+559.607935480" Feb 23 13:23:57.798086 master-0 kubenswrapper[26474]: I0223 13:23:57.797969 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t" Feb 23 13:23:57.804280 master-0 kubenswrapper[26474]: I0223 13:23:57.804230 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcc8\" (UniqueName: \"kubernetes.io/projected/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-kube-api-access-6qcc8\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:57.804764 master-0 kubenswrapper[26474]: I0223 13:23:57.804714 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:57.804825 master-0 kubenswrapper[26474]: I0223 13:23:57.804763 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:57.805995 master-0 kubenswrapper[26474]: I0223 13:23:57.805966 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:57.810444 master-0 kubenswrapper[26474]: I0223 13:23:57.809756 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:57.821571 master-0 kubenswrapper[26474]: I0223 13:23:57.821528 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcc8\" (UniqueName: \"kubernetes.io/projected/1ab1d91b-91b5-47ce-aefb-b42c7e651fdd-kube-api-access-6qcc8\") pod \"nmstate-console-plugin-5c78fc5d65-4j4d8\" (UID: \"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:57.868063 master-0 kubenswrapper[26474]: I0223 13:23:57.868008 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q" Feb 23 13:23:57.868522 master-0 kubenswrapper[26474]: I0223 13:23:57.868504 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74b4bc9995-5bzrc"] Feb 23 13:23:57.869585 master-0 kubenswrapper[26474]: I0223 13:23:57.869554 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:57.882960 master-0 kubenswrapper[26474]: I0223 13:23:57.882900 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74b4bc9995-5bzrc"] Feb 23 13:23:57.928367 master-0 kubenswrapper[26474]: I0223 13:23:57.917298 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lsm7x" Feb 23 13:23:57.928367 master-0 kubenswrapper[26474]: I0223 13:23:57.917639 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-console-config\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:57.928367 master-0 kubenswrapper[26474]: I0223 13:23:57.917687 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-oauth-serving-cert\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:57.928367 master-0 kubenswrapper[26474]: I0223 13:23:57.917734 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76aba90e-5944-46b7-8ef9-cad8855264c7-console-serving-cert\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:57.928367 master-0 kubenswrapper[26474]: I0223 13:23:57.917764 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-trusted-ca-bundle\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:57.928367 master-0 kubenswrapper[26474]: I0223 13:23:57.917818 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phr2b\" (UniqueName: 
\"kubernetes.io/projected/76aba90e-5944-46b7-8ef9-cad8855264c7-kube-api-access-phr2b\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:57.928367 master-0 kubenswrapper[26474]: I0223 13:23:57.917847 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76aba90e-5944-46b7-8ef9-cad8855264c7-console-oauth-config\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:57.928367 master-0 kubenswrapper[26474]: I0223 13:23:57.917890 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-service-ca\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.020637 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.021910 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phr2b\" (UniqueName: \"kubernetes.io/projected/76aba90e-5944-46b7-8ef9-cad8855264c7-kube-api-access-phr2b\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.021967 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76aba90e-5944-46b7-8ef9-cad8855264c7-console-oauth-config\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.022005 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-service-ca\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.022033 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-console-config\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.022047 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-oauth-serving-cert\") pod 
\"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.022082 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76aba90e-5944-46b7-8ef9-cad8855264c7-console-serving-cert\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.022107 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-trusted-ca-bundle\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.023368 master-0 kubenswrapper[26474]: I0223 13:23:58.023292 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-trusted-ca-bundle\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.024093 master-0 kubenswrapper[26474]: I0223 13:23:58.023290 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-service-ca\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.024126 master-0 kubenswrapper[26474]: I0223 13:23:58.024094 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-console-config\") 
pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.027365 master-0 kubenswrapper[26474]: I0223 13:23:58.024386 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76aba90e-5944-46b7-8ef9-cad8855264c7-oauth-serving-cert\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.041772 master-0 kubenswrapper[26474]: I0223 13:23:58.038071 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76aba90e-5944-46b7-8ef9-cad8855264c7-console-serving-cert\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.043484 master-0 kubenswrapper[26474]: I0223 13:23:58.042893 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76aba90e-5944-46b7-8ef9-cad8855264c7-console-oauth-config\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.063514 master-0 kubenswrapper[26474]: I0223 13:23:58.063471 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phr2b\" (UniqueName: \"kubernetes.io/projected/76aba90e-5944-46b7-8ef9-cad8855264c7-kube-api-access-phr2b\") pod \"console-74b4bc9995-5bzrc\" (UID: \"76aba90e-5944-46b7-8ef9-cad8855264c7\") " pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.070849 master-0 kubenswrapper[26474]: I0223 13:23:58.070812 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74b4bc9995-5bzrc" Feb 23 13:23:58.437442 master-0 kubenswrapper[26474]: W0223 13:23:58.437398 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb242a27c_8dd3_43ee_af46_4dd04ffd1cce.slice/crio-f8df9596238689cc489ae0957d7e8afe38b39802fa69f5a9383079512d900209 WatchSource:0}: Error finding container f8df9596238689cc489ae0957d7e8afe38b39802fa69f5a9383079512d900209: Status 404 returned error can't find the container with id f8df9596238689cc489ae0957d7e8afe38b39802fa69f5a9383079512d900209 Feb 23 13:23:58.439357 master-0 kubenswrapper[26474]: I0223 13:23:58.439318 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t"] Feb 23 13:23:58.539744 master-0 kubenswrapper[26474]: I0223 13:23:58.538583 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q"] Feb 23 13:23:58.594376 master-0 kubenswrapper[26474]: I0223 13:23:58.594306 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74b4bc9995-5bzrc"] Feb 23 13:23:58.596460 master-0 kubenswrapper[26474]: W0223 13:23:58.596416 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76aba90e_5944_46b7_8ef9_cad8855264c7.slice/crio-7a255d22f590eb5766d776c63ca140c316056276c9cf8540bf02281f30c327e9 WatchSource:0}: Error finding container 7a255d22f590eb5766d776c63ca140c316056276c9cf8540bf02281f30c327e9: Status 404 returned error can't find the container with id 7a255d22f590eb5766d776c63ca140c316056276c9cf8540bf02281f30c327e9 Feb 23 13:23:58.685975 master-0 kubenswrapper[26474]: I0223 13:23:58.685935 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8"] Feb 23 13:23:58.688875 master-0 kubenswrapper[26474]: W0223 
13:23:58.688804 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ab1d91b_91b5_47ce_aefb_b42c7e651fdd.slice/crio-72f181a83299616cb0404937beff78d6cf2167ce21b97c60b82b748c74a6d982 WatchSource:0}: Error finding container 72f181a83299616cb0404937beff78d6cf2167ce21b97c60b82b748c74a6d982: Status 404 returned error can't find the container with id 72f181a83299616cb0404937beff78d6cf2167ce21b97c60b82b748c74a6d982 Feb 23 13:23:58.748025 master-0 kubenswrapper[26474]: I0223 13:23:58.747956 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74b4bc9995-5bzrc" event={"ID":"76aba90e-5944-46b7-8ef9-cad8855264c7","Type":"ContainerStarted","Data":"1c55924e8cb7c8a2d18ce24ce0f71b3b4b5f4d4a44479eb0bfa9f5bae2c9e20f"} Feb 23 13:23:58.748533 master-0 kubenswrapper[26474]: I0223 13:23:58.748079 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74b4bc9995-5bzrc" event={"ID":"76aba90e-5944-46b7-8ef9-cad8855264c7","Type":"ContainerStarted","Data":"7a255d22f590eb5766d776c63ca140c316056276c9cf8540bf02281f30c327e9"} Feb 23 13:23:58.751032 master-0 kubenswrapper[26474]: I0223 13:23:58.750983 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9tms9" event={"ID":"cdefb8f6-8066-4ce4-b76c-5b19831081c9","Type":"ContainerStarted","Data":"ed859120db7bc6d48dec883235fada7bae8518261573ca9398f36dcfd3459b2b"} Feb 23 13:23:58.751137 master-0 kubenswrapper[26474]: I0223 13:23:58.751038 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9tms9" event={"ID":"cdefb8f6-8066-4ce4-b76c-5b19831081c9","Type":"ContainerStarted","Data":"c5f03c00367bd662fc83b7b36b83790cfee67b33df61f1d1c7d3c05372a963d7"} Feb 23 13:23:58.751177 master-0 kubenswrapper[26474]: I0223 13:23:58.751150 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9tms9" Feb 23 
13:23:58.753012 master-0 kubenswrapper[26474]: I0223 13:23:58.752953 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q" event={"ID":"897817d0-59c1-45ca-afe1-9d525ad217d4","Type":"ContainerStarted","Data":"7fae66cde8cc73933257d4c00a0513f37b90c940e895b0d9cdb7ca60c6ef5f6d"} Feb 23 13:23:58.754698 master-0 kubenswrapper[26474]: I0223 13:23:58.754664 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lsm7x" event={"ID":"09806916-edb0-4917-a1c1-73690ff280cd","Type":"ContainerStarted","Data":"758fca30abf424b7493ee8c66d29414dc0c8ee2bde9109653211e4ebaa4d9a33"} Feb 23 13:23:58.756094 master-0 kubenswrapper[26474]: I0223 13:23:58.756045 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" event={"ID":"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd","Type":"ContainerStarted","Data":"72f181a83299616cb0404937beff78d6cf2167ce21b97c60b82b748c74a6d982"} Feb 23 13:23:58.757160 master-0 kubenswrapper[26474]: I0223 13:23:58.757135 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t" event={"ID":"b242a27c-8dd3-43ee-af46-4dd04ffd1cce","Type":"ContainerStarted","Data":"f8df9596238689cc489ae0957d7e8afe38b39802fa69f5a9383079512d900209"} Feb 23 13:23:58.794297 master-0 kubenswrapper[26474]: I0223 13:23:58.794222 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9tms9" podStartSLOduration=3.79420327 podStartE2EDuration="3.79420327s" podCreationTimestamp="2026-02-23 13:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:23:58.793286038 +0000 UTC m=+560.639793735" watchObservedRunningTime="2026-02-23 13:23:58.79420327 +0000 UTC m=+560.640710947" Feb 23 13:23:58.798827 master-0 kubenswrapper[26474]: I0223 13:23:58.798784 
26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74b4bc9995-5bzrc" podStartSLOduration=1.79877373 podStartE2EDuration="1.79877373s" podCreationTimestamp="2026-02-23 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:23:58.770409004 +0000 UTC m=+560.616916691" watchObservedRunningTime="2026-02-23 13:23:58.79877373 +0000 UTC m=+560.645281407"
Feb 23 13:24:03.819172 master-0 kubenswrapper[26474]: I0223 13:24:03.819116 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d" event={"ID":"7d38b74b-692d-400d-9cb0-bdfe09afc08f","Type":"ContainerStarted","Data":"0b0cc03da347c501d9bda4672b95ce5e12b7036245d68e67c7aca55673781c1b"}
Feb 23 13:24:03.819733 master-0 kubenswrapper[26474]: I0223 13:24:03.819291 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:24:03.820972 master-0 kubenswrapper[26474]: I0223 13:24:03.820942 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q" event={"ID":"897817d0-59c1-45ca-afe1-9d525ad217d4","Type":"ContainerStarted","Data":"b0bad6d57c8dfd0b65f690fe0ad72f5fa7cdb84766f28b21ae6feea367af4145"}
Feb 23 13:24:03.821070 master-0 kubenswrapper[26474]: I0223 13:24:03.821046 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q"
Feb 23 13:24:03.823554 master-0 kubenswrapper[26474]: I0223 13:24:03.823503 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lsm7x" event={"ID":"09806916-edb0-4917-a1c1-73690ff280cd","Type":"ContainerStarted","Data":"c0068a71510751af8d4079116f40601d224d0bc2bc6c2b8d34a44350b30bd048"}
Feb 23 13:24:03.823725 master-0 kubenswrapper[26474]: I0223 13:24:03.823694 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lsm7x"
Feb 23 13:24:03.826088 master-0 kubenswrapper[26474]: I0223 13:24:03.826050 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" event={"ID":"1ab1d91b-91b5-47ce-aefb-b42c7e651fdd","Type":"ContainerStarted","Data":"250aa91b4f533a586a4d970a4cbba75cd90509fc322ab03e23b346b6433b33e0"}
Feb 23 13:24:03.828986 master-0 kubenswrapper[26474]: I0223 13:24:03.828952 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t" event={"ID":"b242a27c-8dd3-43ee-af46-4dd04ffd1cce","Type":"ContainerStarted","Data":"4cc4aef8b5302135aaabb79eea62c15d9b6e2646166f08f6342a4017af788acd"}
Feb 23 13:24:03.829084 master-0 kubenswrapper[26474]: I0223 13:24:03.828994 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t" event={"ID":"b242a27c-8dd3-43ee-af46-4dd04ffd1cce","Type":"ContainerStarted","Data":"779ef35105802e80fb2e95266db91dc25103e77b964730dea757982b28d61408"}
Feb 23 13:24:03.831127 master-0 kubenswrapper[26474]: I0223 13:24:03.831072 26474 generic.go:334] "Generic (PLEG): container finished" podID="96d9fb56-251d-4ab0-97b6-63645853e820" containerID="843164e9c9875212db32f60a111d7cabcb0169756b9066daab267c1fffbac02e" exitCode=0
Feb 23 13:24:03.831190 master-0 kubenswrapper[26474]: I0223 13:24:03.831134 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerDied","Data":"843164e9c9875212db32f60a111d7cabcb0169756b9066daab267c1fffbac02e"}
Feb 23 13:24:03.843863 master-0 kubenswrapper[26474]: I0223 13:24:03.843787 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d" podStartSLOduration=1.980093749 podStartE2EDuration="8.843767532s" podCreationTimestamp="2026-02-23 13:23:55 +0000 UTC" firstStartedPulling="2026-02-23 13:23:56.205265062 +0000 UTC m=+558.051772739" lastFinishedPulling="2026-02-23 13:24:03.068938835 +0000 UTC m=+564.915446522" observedRunningTime="2026-02-23 13:24:03.83794008 +0000 UTC m=+565.684447767" watchObservedRunningTime="2026-02-23 13:24:03.843767532 +0000 UTC m=+565.690275209"
Feb 23 13:24:03.856402 master-0 kubenswrapper[26474]: I0223 13:24:03.856308 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4j4d8" podStartSLOduration=2.468955926 podStartE2EDuration="6.856290075s" podCreationTimestamp="2026-02-23 13:23:57 +0000 UTC" firstStartedPulling="2026-02-23 13:23:58.69128483 +0000 UTC m=+560.537792507" lastFinishedPulling="2026-02-23 13:24:03.078618979 +0000 UTC m=+564.925126656" observedRunningTime="2026-02-23 13:24:03.856191032 +0000 UTC m=+565.702698729" watchObservedRunningTime="2026-02-23 13:24:03.856290075 +0000 UTC m=+565.702797752"
Feb 23 13:24:03.938239 master-0 kubenswrapper[26474]: I0223 13:24:03.937998 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4nr9t" podStartSLOduration=2.303277417 podStartE2EDuration="6.937973321s" podCreationTimestamp="2026-02-23 13:23:57 +0000 UTC" firstStartedPulling="2026-02-23 13:23:58.440736458 +0000 UTC m=+560.287244135" lastFinishedPulling="2026-02-23 13:24:03.075432362 +0000 UTC m=+564.921940039" observedRunningTime="2026-02-23 13:24:03.922161428 +0000 UTC m=+565.768669125" watchObservedRunningTime="2026-02-23 13:24:03.937973321 +0000 UTC m=+565.784481018"
Feb 23 13:24:03.956712 master-0 kubenswrapper[26474]: I0223 13:24:03.952788 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lsm7x" podStartSLOduration=1.937172099 podStartE2EDuration="6.952762348s" podCreationTimestamp="2026-02-23 13:23:57 +0000 UTC" firstStartedPulling="2026-02-23 13:23:58.049081132 +0000 UTC m=+559.895588809" lastFinishedPulling="2026-02-23 13:24:03.064671381 +0000 UTC m=+564.911179058" observedRunningTime="2026-02-23 13:24:03.947557463 +0000 UTC m=+565.794065150" watchObservedRunningTime="2026-02-23 13:24:03.952762348 +0000 UTC m=+565.799270025"
Feb 23 13:24:04.865199 master-0 kubenswrapper[26474]: I0223 13:24:04.865146 26474 generic.go:334] "Generic (PLEG): container finished" podID="96d9fb56-251d-4ab0-97b6-63645853e820" containerID="1e594a89420e0c186f689893f91b58f5007fbd7aa2b8394591c66a7f79fc5628" exitCode=0
Feb 23 13:24:04.865811 master-0 kubenswrapper[26474]: I0223 13:24:04.865222 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerDied","Data":"1e594a89420e0c186f689893f91b58f5007fbd7aa2b8394591c66a7f79fc5628"}
Feb 23 13:24:04.909230 master-0 kubenswrapper[26474]: I0223 13:24:04.909126 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q" podStartSLOduration=3.378555493 podStartE2EDuration="7.909103267s" podCreationTimestamp="2026-02-23 13:23:57 +0000 UTC" firstStartedPulling="2026-02-23 13:23:58.53960332 +0000 UTC m=+560.386110997" lastFinishedPulling="2026-02-23 13:24:03.070151094 +0000 UTC m=+564.916658771" observedRunningTime="2026-02-23 13:24:03.975617832 +0000 UTC m=+565.822125519" watchObservedRunningTime="2026-02-23 13:24:04.909103267 +0000 UTC m=+566.755610964"
Feb 23 13:24:05.875628 master-0 kubenswrapper[26474]: I0223 13:24:05.875572 26474 generic.go:334] "Generic (PLEG): container finished" podID="96d9fb56-251d-4ab0-97b6-63645853e820" containerID="0f7216da7799eada932e7eb507f9515b80df0497a29d7fa0ebb2372317b6da4f" exitCode=0
Feb 23 13:24:05.876142 master-0 kubenswrapper[26474]: I0223 13:24:05.875635 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerDied","Data":"0f7216da7799eada932e7eb507f9515b80df0497a29d7fa0ebb2372317b6da4f"}
Feb 23 13:24:06.886693 master-0 kubenswrapper[26474]: I0223 13:24:06.886630 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerStarted","Data":"78282c8e54155a4f91b9e7528cfacac26d8f009f0f0b4433195cb8ca3d4aa3a6"}
Feb 23 13:24:06.886693 master-0 kubenswrapper[26474]: I0223 13:24:06.886694 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerStarted","Data":"c1e0b61a24b36f0af30046680b65181afefcf20fe6ee68a67334c812dd4f5d08"}
Feb 23 13:24:06.903738 master-0 kubenswrapper[26474]: I0223 13:24:06.886707 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerStarted","Data":"53e5e4feb04ea5fac3dc4227d075b8f658bed90706c62ad31eb5e8a30d1c4592"}
Feb 23 13:24:06.903738 master-0 kubenswrapper[26474]: I0223 13:24:06.886719 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerStarted","Data":"a39e2c1d5b1b10a3035eb52c6ba9d90ca1ed693d9a2e8c5314cf84d2b8170b1e"}
Feb 23 13:24:06.903738 master-0 kubenswrapper[26474]: I0223 13:24:06.886732 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerStarted","Data":"944b31fb65544f825a5353a92aeb67840dd2dbac164ca9eaba54b59513181cb0"}
Feb 23 13:24:07.364977 master-0 kubenswrapper[26474]: I0223 13:24:07.364906 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9tms9"
Feb 23 13:24:07.903440 master-0 kubenswrapper[26474]: I0223 13:24:07.903321 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8mmw5" event={"ID":"96d9fb56-251d-4ab0-97b6-63645853e820","Type":"ContainerStarted","Data":"db621e9d58ba83d5f96201e02244c49e1c6afcac7139f52b9aa10a312c36a541"}
Feb 23 13:24:07.904034 master-0 kubenswrapper[26474]: I0223 13:24:07.903588 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:24:07.939823 master-0 kubenswrapper[26474]: I0223 13:24:07.939715 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-8mmw5" podStartSLOduration=5.876578713 podStartE2EDuration="12.93968895s" podCreationTimestamp="2026-02-23 13:23:55 +0000 UTC" firstStartedPulling="2026-02-23 13:23:56.006097224 +0000 UTC m=+557.852604901" lastFinishedPulling="2026-02-23 13:24:03.069207461 +0000 UTC m=+564.915715138" observedRunningTime="2026-02-23 13:24:07.931115273 +0000 UTC m=+569.777623000" watchObservedRunningTime="2026-02-23 13:24:07.93968895 +0000 UTC m=+569.786196637"
Feb 23 13:24:08.071902 master-0 kubenswrapper[26474]: I0223 13:24:08.071825 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74b4bc9995-5bzrc"
Feb 23 13:24:08.071902 master-0 kubenswrapper[26474]: I0223 13:24:08.071900 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74b4bc9995-5bzrc"
Feb 23 13:24:08.077666 master-0 kubenswrapper[26474]: I0223 13:24:08.077608 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-74b4bc9995-5bzrc"
Feb 23 13:24:08.919201 master-0 kubenswrapper[26474]: I0223 13:24:08.919146 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74b4bc9995-5bzrc"
Feb 23 13:24:09.015117 master-0 kubenswrapper[26474]: I0223 13:24:09.014837 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bd776f658-lwrp8"]
Feb 23 13:24:10.813208 master-0 kubenswrapper[26474]: I0223 13:24:10.813090 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:24:10.881443 master-0 kubenswrapper[26474]: I0223 13:24:10.881362 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:24:12.956506 master-0 kubenswrapper[26474]: I0223 13:24:12.956455 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lsm7x"
Feb 23 13:24:15.802200 master-0 kubenswrapper[26474]: I0223 13:24:15.802110 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-rmg9d"
Feb 23 13:24:15.891898 master-0 kubenswrapper[26474]: I0223 13:24:15.891818 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-6bhkv"
Feb 23 13:24:17.876194 master-0 kubenswrapper[26474]: I0223 13:24:17.876061 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qf29q"
Feb 23 13:24:22.689688 master-0 kubenswrapper[26474]: I0223 13:24:22.686593 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-2njbv"]
Feb 23 13:24:22.689688 master-0 kubenswrapper[26474]: I0223 13:24:22.688696 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.690947 master-0 kubenswrapper[26474]: I0223 13:24:22.690888 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Feb 23 13:24:22.712525 master-0 kubenswrapper[26474]: I0223 13:24:22.712445 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-2njbv"]
Feb 23 13:24:22.772621 master-0 kubenswrapper[26474]: I0223 13:24:22.772551 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-run-udev\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.772987 master-0 kubenswrapper[26474]: I0223 13:24:22.772958 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-registration-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.773200 master-0 kubenswrapper[26474]: I0223 13:24:22.773174 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-node-plugin-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.773452 master-0 kubenswrapper[26474]: I0223 13:24:22.773426 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-pod-volumes-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.773593 master-0 kubenswrapper[26474]: I0223 13:24:22.773568 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-csi-plugin-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.773723 master-0 kubenswrapper[26474]: I0223 13:24:22.773706 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-sys\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.774106 master-0 kubenswrapper[26474]: I0223 13:24:22.774078 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-device-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.774256 master-0 kubenswrapper[26474]: I0223 13:24:22.774235 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-file-lock-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.774453 master-0 kubenswrapper[26474]: I0223 13:24:22.774425 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-lvmd-config\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.774650 master-0 kubenswrapper[26474]: I0223 13:24:22.774626 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-metrics-cert\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.774800 master-0 kubenswrapper[26474]: I0223 13:24:22.774772 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87pbl\" (UniqueName: \"kubernetes.io/projected/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-kube-api-access-87pbl\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876053 master-0 kubenswrapper[26474]: I0223 13:24:22.875974 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-device-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876053 master-0 kubenswrapper[26474]: I0223 13:24:22.876040 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-file-lock-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876073 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-lvmd-config\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876109 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-metrics-cert\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876129 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87pbl\" (UniqueName: \"kubernetes.io/projected/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-kube-api-access-87pbl\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876145 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-run-udev\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876171 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-registration-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876214 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-node-plugin-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876245 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-pod-volumes-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876259 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-csi-plugin-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876323 master-0 kubenswrapper[26474]: I0223 13:24:22.876279 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-sys\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876639 master-0 kubenswrapper[26474]: I0223 13:24:22.876393 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-sys\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876639 master-0 kubenswrapper[26474]: I0223 13:24:22.876473 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-device-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876743 master-0 kubenswrapper[26474]: I0223 13:24:22.876711 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-file-lock-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876854 master-0 kubenswrapper[26474]: I0223 13:24:22.876807 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-pod-volumes-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.876997 master-0 kubenswrapper[26474]: I0223 13:24:22.876956 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-csi-plugin-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.877038 master-0 kubenswrapper[26474]: I0223 13:24:22.876990 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-run-udev\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.877073 master-0 kubenswrapper[26474]: I0223 13:24:22.876978 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-registration-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.877122 master-0 kubenswrapper[26474]: I0223 13:24:22.877088 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-node-plugin-dir\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.877238 master-0 kubenswrapper[26474]: I0223 13:24:22.877192 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-lvmd-config\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.881724 master-0 kubenswrapper[26474]: I0223 13:24:22.881192 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-metrics-cert\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:22.898947 master-0 kubenswrapper[26474]: I0223 13:24:22.898867 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87pbl\" (UniqueName: \"kubernetes.io/projected/dda7cf1a-4ddd-426e-b721-d0b660a12a1c-kube-api-access-87pbl\") pod \"vg-manager-2njbv\" (UID: \"dda7cf1a-4ddd-426e-b721-d0b660a12a1c\") " pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:23.014418 master-0 kubenswrapper[26474]: I0223 13:24:23.014355 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:23.498112 master-0 kubenswrapper[26474]: I0223 13:24:23.498023 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-2njbv"]
Feb 23 13:24:23.507774 master-0 kubenswrapper[26474]: W0223 13:24:23.507706 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddda7cf1a_4ddd_426e_b721_d0b660a12a1c.slice/crio-8d36a29647fda898bd1627e54fd112e66d004180974f11f827fe725240974ea3 WatchSource:0}: Error finding container 8d36a29647fda898bd1627e54fd112e66d004180974f11f827fe725240974ea3: Status 404 returned error can't find the container with id 8d36a29647fda898bd1627e54fd112e66d004180974f11f827fe725240974ea3
Feb 23 13:24:24.085349 master-0 kubenswrapper[26474]: I0223 13:24:24.084554 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-2njbv" event={"ID":"dda7cf1a-4ddd-426e-b721-d0b660a12a1c","Type":"ContainerStarted","Data":"2e627b2e6f993b4c0cfb88b4b3c8a425759ed5a4f5e465e81802b1a701904fd3"}
Feb 23 13:24:24.085349 master-0 kubenswrapper[26474]: I0223 13:24:24.084633 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-2njbv" event={"ID":"dda7cf1a-4ddd-426e-b721-d0b660a12a1c","Type":"ContainerStarted","Data":"8d36a29647fda898bd1627e54fd112e66d004180974f11f827fe725240974ea3"}
Feb 23 13:24:24.122210 master-0 kubenswrapper[26474]: I0223 13:24:24.121896 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-2njbv" podStartSLOduration=2.121866439 podStartE2EDuration="2.121866439s" podCreationTimestamp="2026-02-23 13:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:24:24.113725572 +0000 UTC m=+585.960233259" watchObservedRunningTime="2026-02-23 13:24:24.121866439 +0000 UTC m=+585.968374116"
Feb 23 13:24:25.815273 master-0 kubenswrapper[26474]: I0223 13:24:25.815061 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-8mmw5"
Feb 23 13:24:26.108966 master-0 kubenswrapper[26474]: I0223 13:24:26.108822 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-2njbv_dda7cf1a-4ddd-426e-b721-d0b660a12a1c/vg-manager/0.log"
Feb 23 13:24:26.109152 master-0 kubenswrapper[26474]: I0223 13:24:26.108955 26474 generic.go:334] "Generic (PLEG): container finished" podID="dda7cf1a-4ddd-426e-b721-d0b660a12a1c" containerID="2e627b2e6f993b4c0cfb88b4b3c8a425759ed5a4f5e465e81802b1a701904fd3" exitCode=1
Feb 23 13:24:26.109152 master-0 kubenswrapper[26474]: I0223 13:24:26.109053 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-2njbv" event={"ID":"dda7cf1a-4ddd-426e-b721-d0b660a12a1c","Type":"ContainerDied","Data":"2e627b2e6f993b4c0cfb88b4b3c8a425759ed5a4f5e465e81802b1a701904fd3"}
Feb 23 13:24:26.110037 master-0 kubenswrapper[26474]: I0223 13:24:26.109927 26474 scope.go:117] "RemoveContainer" containerID="2e627b2e6f993b4c0cfb88b4b3c8a425759ed5a4f5e465e81802b1a701904fd3"
Feb 23 13:24:26.495322 master-0 kubenswrapper[26474]: I0223 13:24:26.495133 26474 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Feb 23 13:24:27.042705 master-0 kubenswrapper[26474]: I0223 13:24:27.042447 26474 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-23T13:24:26.495208761Z","Handler":null,"Name":""}
Feb 23 13:24:27.048353 master-0 kubenswrapper[26474]: I0223 13:24:27.046597 26474 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Feb 23 13:24:27.048353 master-0 kubenswrapper[26474]: I0223 13:24:27.046657 26474 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Feb 23 13:24:27.134375 master-0 kubenswrapper[26474]: I0223 13:24:27.132265 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-2njbv_dda7cf1a-4ddd-426e-b721-d0b660a12a1c/vg-manager/0.log"
Feb 23 13:24:27.134375 master-0 kubenswrapper[26474]: I0223 13:24:27.132329 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-2njbv" event={"ID":"dda7cf1a-4ddd-426e-b721-d0b660a12a1c","Type":"ContainerStarted","Data":"9e3948106fd8821ca29ca5e401eabd8519f93a5ce7fef7ea9fe95c6835ae57be"}
Feb 23 13:24:33.015873 master-0 kubenswrapper[26474]: I0223 13:24:33.015772 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:33.018471 master-0 kubenswrapper[26474]: I0223 13:24:33.018412 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:33.201822 master-0 kubenswrapper[26474]: I0223 13:24:33.201747 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:33.203131 master-0 kubenswrapper[26474]: I0223 13:24:33.203098 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-2njbv"
Feb 23 13:24:34.069528 master-0 kubenswrapper[26474]: I0223 13:24:34.069416 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-bd776f658-lwrp8" podUID="9e1d93bf-9366-4a73-90e2-8fc9acec810b" containerName="console" containerID="cri-o://f36fcec11f3d8bb1c7f4b1af48da0a6a1d17052a118731f1488660c217cf7447" gracePeriod=15
Feb 23 13:24:34.217379 master-0 kubenswrapper[26474]: I0223 13:24:34.217307 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd776f658-lwrp8_9e1d93bf-9366-4a73-90e2-8fc9acec810b/console/0.log"
Feb 23 13:24:34.217554 master-0 kubenswrapper[26474]: I0223 13:24:34.217416 26474 generic.go:334] "Generic (PLEG): container finished" podID="9e1d93bf-9366-4a73-90e2-8fc9acec810b" containerID="f36fcec11f3d8bb1c7f4b1af48da0a6a1d17052a118731f1488660c217cf7447" exitCode=2
Feb 23 13:24:34.217632 master-0 kubenswrapper[26474]: I0223 13:24:34.217469 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd776f658-lwrp8" event={"ID":"9e1d93bf-9366-4a73-90e2-8fc9acec810b","Type":"ContainerDied","Data":"f36fcec11f3d8bb1c7f4b1af48da0a6a1d17052a118731f1488660c217cf7447"}
Feb 23 13:24:34.582029 master-0 kubenswrapper[26474]: I0223 13:24:34.581305 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd776f658-lwrp8_9e1d93bf-9366-4a73-90e2-8fc9acec810b/console/0.log"
Feb 23 13:24:34.582029 master-0 kubenswrapper[26474]: I0223 13:24:34.581402 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd776f658-lwrp8"
Feb 23 13:24:34.661010 master-0 kubenswrapper[26474]: I0223 13:24:34.660946 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-oauth-config\") pod \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") "
Feb 23 13:24:34.661010 master-0 kubenswrapper[26474]: I0223 13:24:34.661013 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4jk9\" (UniqueName: \"kubernetes.io/projected/9e1d93bf-9366-4a73-90e2-8fc9acec810b-kube-api-access-v4jk9\") pod \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") "
Feb 23 13:24:34.661291 master-0 kubenswrapper[26474]: I0223 13:24:34.661120 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-serving-cert\") pod \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") "
Feb 23 13:24:34.661291 master-0 kubenswrapper[26474]: I0223 13:24:34.661188 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-service-ca\") pod \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") "
Feb 23 13:24:34.661291 master-0 kubenswrapper[26474]: I0223 13:24:34.661241 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-config\") pod \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") "
Feb 23 13:24:34.661427 master-0 kubenswrapper[26474]: I0223 13:24:34.661312 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-oauth-serving-cert\") pod \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") "
Feb 23 13:24:34.661427 master-0 kubenswrapper[26474]: I0223 13:24:34.661393 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-trusted-ca-bundle\") pod \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\" (UID: \"9e1d93bf-9366-4a73-90e2-8fc9acec810b\") "
Feb 23 13:24:34.662433 master-0 kubenswrapper[26474]: I0223 13:24:34.662040 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-service-ca" (OuterVolumeSpecName: "service-ca") pod "9e1d93bf-9366-4a73-90e2-8fc9acec810b" (UID: "9e1d93bf-9366-4a73-90e2-8fc9acec810b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:24:34.662433 master-0 kubenswrapper[26474]: I0223 13:24:34.662180 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9e1d93bf-9366-4a73-90e2-8fc9acec810b" (UID: "9e1d93bf-9366-4a73-90e2-8fc9acec810b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:24:34.662433 master-0 kubenswrapper[26474]: I0223 13:24:34.662240 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-config" (OuterVolumeSpecName: "console-config") pod "9e1d93bf-9366-4a73-90e2-8fc9acec810b" (UID: "9e1d93bf-9366-4a73-90e2-8fc9acec810b"). InnerVolumeSpecName "console-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:24:34.662433 master-0 kubenswrapper[26474]: I0223 13:24:34.662411 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9e1d93bf-9366-4a73-90e2-8fc9acec810b" (UID: "9e1d93bf-9366-4a73-90e2-8fc9acec810b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:24:34.667137 master-0 kubenswrapper[26474]: I0223 13:24:34.667100 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9e1d93bf-9366-4a73-90e2-8fc9acec810b" (UID: "9e1d93bf-9366-4a73-90e2-8fc9acec810b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:24:34.670972 master-0 kubenswrapper[26474]: I0223 13:24:34.670795 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9e1d93bf-9366-4a73-90e2-8fc9acec810b" (UID: "9e1d93bf-9366-4a73-90e2-8fc9acec810b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:24:34.672691 master-0 kubenswrapper[26474]: I0223 13:24:34.672630 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1d93bf-9366-4a73-90e2-8fc9acec810b-kube-api-access-v4jk9" (OuterVolumeSpecName: "kube-api-access-v4jk9") pod "9e1d93bf-9366-4a73-90e2-8fc9acec810b" (UID: "9e1d93bf-9366-4a73-90e2-8fc9acec810b"). InnerVolumeSpecName "kube-api-access-v4jk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:24:34.763401 master-0 kubenswrapper[26474]: I0223 13:24:34.763299 26474 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:34.763401 master-0 kubenswrapper[26474]: I0223 13:24:34.763378 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4jk9\" (UniqueName: \"kubernetes.io/projected/9e1d93bf-9366-4a73-90e2-8fc9acec810b-kube-api-access-v4jk9\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:34.763401 master-0 kubenswrapper[26474]: I0223 13:24:34.763395 26474 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:34.763401 master-0 kubenswrapper[26474]: I0223 13:24:34.763413 26474 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:34.763902 master-0 kubenswrapper[26474]: I0223 13:24:34.763427 26474 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-console-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:34.763902 master-0 kubenswrapper[26474]: I0223 13:24:34.763440 26474 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:34.763902 master-0 kubenswrapper[26474]: I0223 13:24:34.763457 26474 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9e1d93bf-9366-4a73-90e2-8fc9acec810b-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:35.227770 master-0 kubenswrapper[26474]: I0223 13:24:35.227703 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd776f658-lwrp8_9e1d93bf-9366-4a73-90e2-8fc9acec810b/console/0.log" Feb 23 13:24:35.228436 master-0 kubenswrapper[26474]: I0223 13:24:35.227852 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd776f658-lwrp8" event={"ID":"9e1d93bf-9366-4a73-90e2-8fc9acec810b","Type":"ContainerDied","Data":"78b3b187413aecf7aadd3e6bb8f8a0f50e61056bc8ac371436ec5d2b4f3e0d08"} Feb 23 13:24:35.228436 master-0 kubenswrapper[26474]: I0223 13:24:35.227962 26474 scope.go:117] "RemoveContainer" containerID="f36fcec11f3d8bb1c7f4b1af48da0a6a1d17052a118731f1488660c217cf7447" Feb 23 13:24:35.228436 master-0 kubenswrapper[26474]: I0223 13:24:35.227883 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bd776f658-lwrp8" Feb 23 13:24:35.318707 master-0 kubenswrapper[26474]: I0223 13:24:35.318620 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bd776f658-lwrp8"] Feb 23 13:24:35.392379 master-0 kubenswrapper[26474]: I0223 13:24:35.392300 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-bd776f658-lwrp8"] Feb 23 13:24:35.613556 master-0 kubenswrapper[26474]: I0223 13:24:35.613484 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ngknx"] Feb 23 13:24:35.613979 master-0 kubenswrapper[26474]: E0223 13:24:35.613948 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1d93bf-9366-4a73-90e2-8fc9acec810b" containerName="console" Feb 23 13:24:35.613979 master-0 kubenswrapper[26474]: I0223 13:24:35.613976 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1d93bf-9366-4a73-90e2-8fc9acec810b" containerName="console" Feb 23 13:24:35.614353 master-0 kubenswrapper[26474]: I0223 13:24:35.614306 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1d93bf-9366-4a73-90e2-8fc9acec810b" containerName="console" Feb 23 13:24:35.615236 master-0 kubenswrapper[26474]: I0223 13:24:35.615146 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:35.617519 master-0 kubenswrapper[26474]: I0223 13:24:35.617475 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 23 13:24:35.617667 master-0 kubenswrapper[26474]: I0223 13:24:35.617657 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 23 13:24:35.622735 master-0 kubenswrapper[26474]: I0223 13:24:35.622662 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ngknx"] Feb 23 13:24:35.684899 master-0 kubenswrapper[26474]: I0223 13:24:35.684481 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q472m\" (UniqueName: \"kubernetes.io/projected/e79bc2ef-593e-4f36-a1ac-c70064cb331d-kube-api-access-q472m\") pod \"openstack-operator-index-ngknx\" (UID: \"e79bc2ef-593e-4f36-a1ac-c70064cb331d\") " pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:35.786863 master-0 kubenswrapper[26474]: I0223 13:24:35.786794 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q472m\" (UniqueName: \"kubernetes.io/projected/e79bc2ef-593e-4f36-a1ac-c70064cb331d-kube-api-access-q472m\") pod \"openstack-operator-index-ngknx\" (UID: \"e79bc2ef-593e-4f36-a1ac-c70064cb331d\") " pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:35.806390 master-0 kubenswrapper[26474]: I0223 13:24:35.805894 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q472m\" (UniqueName: \"kubernetes.io/projected/e79bc2ef-593e-4f36-a1ac-c70064cb331d-kube-api-access-q472m\") pod \"openstack-operator-index-ngknx\" (UID: \"e79bc2ef-593e-4f36-a1ac-c70064cb331d\") " pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:35.936073 master-0 
kubenswrapper[26474]: I0223 13:24:35.935926 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:36.406800 master-0 kubenswrapper[26474]: I0223 13:24:36.406737 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1d93bf-9366-4a73-90e2-8fc9acec810b" path="/var/lib/kubelet/pods/9e1d93bf-9366-4a73-90e2-8fc9acec810b/volumes" Feb 23 13:24:36.407692 master-0 kubenswrapper[26474]: I0223 13:24:36.407665 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ngknx"] Feb 23 13:24:36.408429 master-0 kubenswrapper[26474]: W0223 13:24:36.408402 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode79bc2ef_593e_4f36_a1ac_c70064cb331d.slice/crio-f4e59d8be4fd2e06020d6142c4ed2134d846ef4d2af6f64c5b9c2fecea1f88c6 WatchSource:0}: Error finding container f4e59d8be4fd2e06020d6142c4ed2134d846ef4d2af6f64c5b9c2fecea1f88c6: Status 404 returned error can't find the container with id f4e59d8be4fd2e06020d6142c4ed2134d846ef4d2af6f64c5b9c2fecea1f88c6 Feb 23 13:24:37.253362 master-0 kubenswrapper[26474]: I0223 13:24:37.252024 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ngknx" event={"ID":"e79bc2ef-593e-4f36-a1ac-c70064cb331d","Type":"ContainerStarted","Data":"f4e59d8be4fd2e06020d6142c4ed2134d846ef4d2af6f64c5b9c2fecea1f88c6"} Feb 23 13:24:38.275771 master-0 kubenswrapper[26474]: I0223 13:24:38.275676 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ngknx" event={"ID":"e79bc2ef-593e-4f36-a1ac-c70064cb331d","Type":"ContainerStarted","Data":"14677e761428987289082c50264c5070cdf6fbe8a6b6b1dea56ddba091cdf31e"} Feb 23 13:24:38.310606 master-0 kubenswrapper[26474]: I0223 13:24:38.310087 26474 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/openstack-operator-index-ngknx" podStartSLOduration=2.579864647 podStartE2EDuration="3.310063864s" podCreationTimestamp="2026-02-23 13:24:35 +0000 UTC" firstStartedPulling="2026-02-23 13:24:36.417199617 +0000 UTC m=+598.263707294" lastFinishedPulling="2026-02-23 13:24:37.147398824 +0000 UTC m=+598.993906511" observedRunningTime="2026-02-23 13:24:38.306484966 +0000 UTC m=+600.152992683" watchObservedRunningTime="2026-02-23 13:24:38.310063864 +0000 UTC m=+600.156571561" Feb 23 13:24:45.936405 master-0 kubenswrapper[26474]: I0223 13:24:45.936315 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:45.936405 master-0 kubenswrapper[26474]: I0223 13:24:45.936392 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:45.982233 master-0 kubenswrapper[26474]: I0223 13:24:45.982126 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:46.419304 master-0 kubenswrapper[26474]: I0223 13:24:46.418963 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ngknx" Feb 23 13:24:47.384443 master-0 kubenswrapper[26474]: I0223 13:24:47.384389 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8"] Feb 23 13:24:47.387643 master-0 kubenswrapper[26474]: I0223 13:24:47.387607 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.402243 master-0 kubenswrapper[26474]: I0223 13:24:47.402186 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8"] Feb 23 13:24:47.535686 master-0 kubenswrapper[26474]: I0223 13:24:47.535575 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.535919 master-0 kubenswrapper[26474]: I0223 13:24:47.535742 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4zgp\" (UniqueName: \"kubernetes.io/projected/b520dde4-e5e6-48b9-ae45-e96dc19be06f-kube-api-access-b4zgp\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.536836 master-0 kubenswrapper[26474]: I0223 13:24:47.536774 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.638410 master-0 kubenswrapper[26474]: I0223 13:24:47.638255 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.638410 master-0 kubenswrapper[26474]: I0223 13:24:47.638367 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4zgp\" (UniqueName: \"kubernetes.io/projected/b520dde4-e5e6-48b9-ae45-e96dc19be06f-kube-api-access-b4zgp\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.638683 master-0 kubenswrapper[26474]: I0223 13:24:47.638479 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.639135 master-0 kubenswrapper[26474]: I0223 13:24:47.639089 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.639216 master-0 kubenswrapper[26474]: I0223 13:24:47.639152 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-util\") pod 
\"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.660799 master-0 kubenswrapper[26474]: I0223 13:24:47.660734 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4zgp\" (UniqueName: \"kubernetes.io/projected/b520dde4-e5e6-48b9-ae45-e96dc19be06f-kube-api-access-b4zgp\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:47.725079 master-0 kubenswrapper[26474]: I0223 13:24:47.724998 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:48.250593 master-0 kubenswrapper[26474]: I0223 13:24:48.249316 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8"] Feb 23 13:24:48.381493 master-0 kubenswrapper[26474]: I0223 13:24:48.381398 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" event={"ID":"b520dde4-e5e6-48b9-ae45-e96dc19be06f","Type":"ContainerStarted","Data":"4744b26093e88e06fbee4346ccdf4896e1ab373dfbea6780c55c6c4db1978d33"} Feb 23 13:24:49.391732 master-0 kubenswrapper[26474]: I0223 13:24:49.391639 26474 generic.go:334] "Generic (PLEG): container finished" podID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerID="629f0baf474b3fd5293fd963d57f31ee60be2cdb9471a743a138530dda93c8cc" exitCode=0 Feb 23 13:24:49.391732 master-0 kubenswrapper[26474]: I0223 13:24:49.391695 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" event={"ID":"b520dde4-e5e6-48b9-ae45-e96dc19be06f","Type":"ContainerDied","Data":"629f0baf474b3fd5293fd963d57f31ee60be2cdb9471a743a138530dda93c8cc"} Feb 23 13:24:50.404192 master-0 kubenswrapper[26474]: I0223 13:24:50.404043 26474 generic.go:334] "Generic (PLEG): container finished" podID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerID="6446451529a019b023b0693c611ea43a6159e4bb1e054f76bdb2ca282fee1714" exitCode=0 Feb 23 13:24:50.404192 master-0 kubenswrapper[26474]: I0223 13:24:50.404101 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" event={"ID":"b520dde4-e5e6-48b9-ae45-e96dc19be06f","Type":"ContainerDied","Data":"6446451529a019b023b0693c611ea43a6159e4bb1e054f76bdb2ca282fee1714"} Feb 23 13:24:51.421583 master-0 kubenswrapper[26474]: I0223 13:24:51.421504 26474 generic.go:334] "Generic (PLEG): container finished" podID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerID="aee90da4f4279171007ac08d1ccb9463c870bf73b383b7de5dafce7eeeae9ce9" exitCode=0 Feb 23 13:24:51.421583 master-0 kubenswrapper[26474]: I0223 13:24:51.421562 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" event={"ID":"b520dde4-e5e6-48b9-ae45-e96dc19be06f","Type":"ContainerDied","Data":"aee90da4f4279171007ac08d1ccb9463c870bf73b383b7de5dafce7eeeae9ce9"} Feb 23 13:24:52.860112 master-0 kubenswrapper[26474]: I0223 13:24:52.860035 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:24:52.952204 master-0 kubenswrapper[26474]: I0223 13:24:52.952148 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-util\") pod \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " Feb 23 13:24:52.952445 master-0 kubenswrapper[26474]: I0223 13:24:52.952277 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-bundle\") pod \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " Feb 23 13:24:52.952445 master-0 kubenswrapper[26474]: I0223 13:24:52.952402 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4zgp\" (UniqueName: \"kubernetes.io/projected/b520dde4-e5e6-48b9-ae45-e96dc19be06f-kube-api-access-b4zgp\") pod \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\" (UID: \"b520dde4-e5e6-48b9-ae45-e96dc19be06f\") " Feb 23 13:24:52.953150 master-0 kubenswrapper[26474]: I0223 13:24:52.953129 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-bundle" (OuterVolumeSpecName: "bundle") pod "b520dde4-e5e6-48b9-ae45-e96dc19be06f" (UID: "b520dde4-e5e6-48b9-ae45-e96dc19be06f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:24:52.958506 master-0 kubenswrapper[26474]: I0223 13:24:52.958402 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b520dde4-e5e6-48b9-ae45-e96dc19be06f-kube-api-access-b4zgp" (OuterVolumeSpecName: "kube-api-access-b4zgp") pod "b520dde4-e5e6-48b9-ae45-e96dc19be06f" (UID: "b520dde4-e5e6-48b9-ae45-e96dc19be06f"). InnerVolumeSpecName "kube-api-access-b4zgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:24:52.966684 master-0 kubenswrapper[26474]: I0223 13:24:52.966647 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-util" (OuterVolumeSpecName: "util") pod "b520dde4-e5e6-48b9-ae45-e96dc19be06f" (UID: "b520dde4-e5e6-48b9-ae45-e96dc19be06f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:24:53.053628 master-0 kubenswrapper[26474]: I0223 13:24:53.053479 26474 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-util\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:53.053628 master-0 kubenswrapper[26474]: I0223 13:24:53.053527 26474 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b520dde4-e5e6-48b9-ae45-e96dc19be06f-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:53.053628 master-0 kubenswrapper[26474]: I0223 13:24:53.053539 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4zgp\" (UniqueName: \"kubernetes.io/projected/b520dde4-e5e6-48b9-ae45-e96dc19be06f-kube-api-access-b4zgp\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:53.449792 master-0 kubenswrapper[26474]: I0223 13:24:53.449535 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" event={"ID":"b520dde4-e5e6-48b9-ae45-e96dc19be06f","Type":"ContainerDied","Data":"4744b26093e88e06fbee4346ccdf4896e1ab373dfbea6780c55c6c4db1978d33"} Feb 23 13:24:53.449792 master-0 kubenswrapper[26474]: I0223 13:24:53.449589 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4744b26093e88e06fbee4346ccdf4896e1ab373dfbea6780c55c6c4db1978d33" Feb 23 13:24:53.449792 master-0 kubenswrapper[26474]: I0223 13:24:53.449612 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8" Feb 23 13:25:00.091675 master-0 kubenswrapper[26474]: I0223 13:25:00.091601 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-7c98k"] Feb 23 13:25:00.092261 master-0 kubenswrapper[26474]: E0223 13:25:00.092048 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerName="extract" Feb 23 13:25:00.092261 master-0 kubenswrapper[26474]: I0223 13:25:00.092069 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerName="extract" Feb 23 13:25:00.092261 master-0 kubenswrapper[26474]: E0223 13:25:00.092094 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerName="pull" Feb 23 13:25:00.092261 master-0 kubenswrapper[26474]: I0223 13:25:00.092102 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerName="pull" Feb 23 13:25:00.092261 master-0 kubenswrapper[26474]: E0223 13:25:00.092125 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerName="util" Feb 23 13:25:00.092261 master-0 
kubenswrapper[26474]: I0223 13:25:00.092134 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerName="util" Feb 23 13:25:00.092805 master-0 kubenswrapper[26474]: I0223 13:25:00.092409 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="b520dde4-e5e6-48b9-ae45-e96dc19be06f" containerName="extract" Feb 23 13:25:00.093183 master-0 kubenswrapper[26474]: I0223 13:25:00.093155 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" Feb 23 13:25:00.139164 master-0 kubenswrapper[26474]: I0223 13:25:00.139099 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-7c98k"] Feb 23 13:25:00.187691 master-0 kubenswrapper[26474]: I0223 13:25:00.187616 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcx9\" (UniqueName: \"kubernetes.io/projected/89bfef34-b109-4625-accc-069dcc323de6-kube-api-access-gtcx9\") pod \"openstack-operator-controller-init-55c649df44-7c98k\" (UID: \"89bfef34-b109-4625-accc-069dcc323de6\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" Feb 23 13:25:00.291307 master-0 kubenswrapper[26474]: I0223 13:25:00.291237 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtcx9\" (UniqueName: \"kubernetes.io/projected/89bfef34-b109-4625-accc-069dcc323de6-kube-api-access-gtcx9\") pod \"openstack-operator-controller-init-55c649df44-7c98k\" (UID: \"89bfef34-b109-4625-accc-069dcc323de6\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" Feb 23 13:25:00.323553 master-0 kubenswrapper[26474]: I0223 13:25:00.323477 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcx9\" (UniqueName: 
\"kubernetes.io/projected/89bfef34-b109-4625-accc-069dcc323de6-kube-api-access-gtcx9\") pod \"openstack-operator-controller-init-55c649df44-7c98k\" (UID: \"89bfef34-b109-4625-accc-069dcc323de6\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" Feb 23 13:25:00.412886 master-0 kubenswrapper[26474]: I0223 13:25:00.412733 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" Feb 23 13:25:00.857782 master-0 kubenswrapper[26474]: I0223 13:25:00.857709 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-7c98k"] Feb 23 13:25:00.865836 master-0 kubenswrapper[26474]: W0223 13:25:00.865785 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89bfef34_b109_4625_accc_069dcc323de6.slice/crio-90e9bf0d0048b9e0b190930059fc5fd15ff6f88d19d52eaac39587eeccb06bd3 WatchSource:0}: Error finding container 90e9bf0d0048b9e0b190930059fc5fd15ff6f88d19d52eaac39587eeccb06bd3: Status 404 returned error can't find the container with id 90e9bf0d0048b9e0b190930059fc5fd15ff6f88d19d52eaac39587eeccb06bd3 Feb 23 13:25:01.543331 master-0 kubenswrapper[26474]: I0223 13:25:01.543208 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" event={"ID":"89bfef34-b109-4625-accc-069dcc323de6","Type":"ContainerStarted","Data":"90e9bf0d0048b9e0b190930059fc5fd15ff6f88d19d52eaac39587eeccb06bd3"} Feb 23 13:25:05.590250 master-0 kubenswrapper[26474]: I0223 13:25:05.590151 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" event={"ID":"89bfef34-b109-4625-accc-069dcc323de6","Type":"ContainerStarted","Data":"c7b6e0d70fe7d73e2badc3cf23ab7399a649690efb764026ada09362ad693a28"} Feb 
23 13:25:05.590901 master-0 kubenswrapper[26474]: I0223 13:25:05.590829 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" Feb 23 13:25:05.628421 master-0 kubenswrapper[26474]: I0223 13:25:05.628272 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" podStartSLOduration=2.008286518 podStartE2EDuration="5.628242841s" podCreationTimestamp="2026-02-23 13:25:00 +0000 UTC" firstStartedPulling="2026-02-23 13:25:00.868371268 +0000 UTC m=+622.714878935" lastFinishedPulling="2026-02-23 13:25:04.488327501 +0000 UTC m=+626.334835258" observedRunningTime="2026-02-23 13:25:05.625421183 +0000 UTC m=+627.471928870" watchObservedRunningTime="2026-02-23 13:25:05.628242841 +0000 UTC m=+627.474750518" Feb 23 13:25:10.417236 master-0 kubenswrapper[26474]: I0223 13:25:10.417179 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55c649df44-7c98k" Feb 23 13:25:30.342064 master-0 kubenswrapper[26474]: I0223 13:25:30.341987 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq"] Feb 23 13:25:30.346359 master-0 kubenswrapper[26474]: I0223 13:25:30.343857 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" Feb 23 13:25:30.365326 master-0 kubenswrapper[26474]: I0223 13:25:30.365244 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6"] Feb 23 13:25:30.367045 master-0 kubenswrapper[26474]: I0223 13:25:30.367010 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" Feb 23 13:25:30.379546 master-0 kubenswrapper[26474]: I0223 13:25:30.379460 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcqjm\" (UniqueName: \"kubernetes.io/projected/22f5345b-e9af-4977-b60c-bd8c78e5dbf5-kube-api-access-lcqjm\") pod \"cinder-operator-controller-manager-55d77d7b5c-67nk6\" (UID: \"22f5345b-e9af-4977-b60c-bd8c78e5dbf5\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" Feb 23 13:25:30.380180 master-0 kubenswrapper[26474]: I0223 13:25:30.379951 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75bcd\" (UniqueName: \"kubernetes.io/projected/8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4-kube-api-access-75bcd\") pod \"barbican-operator-controller-manager-868647ff47-jj5xq\" (UID: \"8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" Feb 23 13:25:30.438865 master-0 kubenswrapper[26474]: I0223 13:25:30.438196 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq"] Feb 23 13:25:30.438865 master-0 kubenswrapper[26474]: I0223 13:25:30.438268 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb"] Feb 23 13:25:30.439824 master-0 kubenswrapper[26474]: I0223 13:25:30.439763 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" Feb 23 13:25:30.452821 master-0 kubenswrapper[26474]: I0223 13:25:30.452725 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6"] Feb 23 13:25:30.485859 master-0 kubenswrapper[26474]: I0223 13:25:30.485787 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75bcd\" (UniqueName: \"kubernetes.io/projected/8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4-kube-api-access-75bcd\") pod \"barbican-operator-controller-manager-868647ff47-jj5xq\" (UID: \"8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" Feb 23 13:25:30.486114 master-0 kubenswrapper[26474]: I0223 13:25:30.486095 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4bdj\" (UniqueName: \"kubernetes.io/projected/13a0dd7d-3565-428b-b1f6-75a3956c808b-kube-api-access-w4bdj\") pod \"designate-operator-controller-manager-6d8bf5c495-v6gfb\" (UID: \"13a0dd7d-3565-428b-b1f6-75a3956c808b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" Feb 23 13:25:30.486251 master-0 kubenswrapper[26474]: I0223 13:25:30.486226 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcqjm\" (UniqueName: \"kubernetes.io/projected/22f5345b-e9af-4977-b60c-bd8c78e5dbf5-kube-api-access-lcqjm\") pod \"cinder-operator-controller-manager-55d77d7b5c-67nk6\" (UID: \"22f5345b-e9af-4977-b60c-bd8c78e5dbf5\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" Feb 23 13:25:30.491747 master-0 kubenswrapper[26474]: I0223 13:25:30.491527 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb"] Feb 23 
13:25:30.542644 master-0 kubenswrapper[26474]: I0223 13:25:30.534887 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75bcd\" (UniqueName: \"kubernetes.io/projected/8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4-kube-api-access-75bcd\") pod \"barbican-operator-controller-manager-868647ff47-jj5xq\" (UID: \"8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" Feb 23 13:25:30.543515 master-0 kubenswrapper[26474]: I0223 13:25:30.543453 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcqjm\" (UniqueName: \"kubernetes.io/projected/22f5345b-e9af-4977-b60c-bd8c78e5dbf5-kube-api-access-lcqjm\") pod \"cinder-operator-controller-manager-55d77d7b5c-67nk6\" (UID: \"22f5345b-e9af-4977-b60c-bd8c78e5dbf5\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" Feb 23 13:25:30.543676 master-0 kubenswrapper[26474]: I0223 13:25:30.543482 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5"] Feb 23 13:25:30.549158 master-0 kubenswrapper[26474]: I0223 13:25:30.549111 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" Feb 23 13:25:30.593651 master-0 kubenswrapper[26474]: I0223 13:25:30.588475 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbb2n\" (UniqueName: \"kubernetes.io/projected/8904511a-508f-4a31-a4b0-15cb665eeb6d-kube-api-access-cbb2n\") pod \"glance-operator-controller-manager-784b5bb6c5-xbnp5\" (UID: \"8904511a-508f-4a31-a4b0-15cb665eeb6d\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" Feb 23 13:25:30.593651 master-0 kubenswrapper[26474]: I0223 13:25:30.588624 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4bdj\" (UniqueName: \"kubernetes.io/projected/13a0dd7d-3565-428b-b1f6-75a3956c808b-kube-api-access-w4bdj\") pod \"designate-operator-controller-manager-6d8bf5c495-v6gfb\" (UID: \"13a0dd7d-3565-428b-b1f6-75a3956c808b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" Feb 23 13:25:30.593651 master-0 kubenswrapper[26474]: I0223 13:25:30.592409 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2"] Feb 23 13:25:30.594134 master-0 kubenswrapper[26474]: I0223 13:25:30.593864 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" Feb 23 13:25:30.621300 master-0 kubenswrapper[26474]: I0223 13:25:30.620934 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4bdj\" (UniqueName: \"kubernetes.io/projected/13a0dd7d-3565-428b-b1f6-75a3956c808b-kube-api-access-w4bdj\") pod \"designate-operator-controller-manager-6d8bf5c495-v6gfb\" (UID: \"13a0dd7d-3565-428b-b1f6-75a3956c808b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" Feb 23 13:25:30.691520 master-0 kubenswrapper[26474]: I0223 13:25:30.691459 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5"] Feb 23 13:25:30.691951 master-0 kubenswrapper[26474]: I0223 13:25:30.691926 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbb2n\" (UniqueName: \"kubernetes.io/projected/8904511a-508f-4a31-a4b0-15cb665eeb6d-kube-api-access-cbb2n\") pod \"glance-operator-controller-manager-784b5bb6c5-xbnp5\" (UID: \"8904511a-508f-4a31-a4b0-15cb665eeb6d\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" Feb 23 13:25:30.692033 master-0 kubenswrapper[26474]: I0223 13:25:30.691981 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rf9\" (UniqueName: \"kubernetes.io/projected/0a34c409-b181-4673-8983-0195a142c22d-kube-api-access-92rf9\") pod \"heat-operator-controller-manager-69f49c598c-kn7n2\" (UID: \"0a34c409-b181-4673-8983-0195a142c22d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" Feb 23 13:25:30.706142 master-0 kubenswrapper[26474]: I0223 13:25:30.706021 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" Feb 23 13:25:30.740019 master-0 kubenswrapper[26474]: I0223 13:25:30.738594 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" Feb 23 13:25:30.750635 master-0 kubenswrapper[26474]: I0223 13:25:30.750558 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbb2n\" (UniqueName: \"kubernetes.io/projected/8904511a-508f-4a31-a4b0-15cb665eeb6d-kube-api-access-cbb2n\") pod \"glance-operator-controller-manager-784b5bb6c5-xbnp5\" (UID: \"8904511a-508f-4a31-a4b0-15cb665eeb6d\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" Feb 23 13:25:30.764803 master-0 kubenswrapper[26474]: I0223 13:25:30.757462 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2"] Feb 23 13:25:30.795507 master-0 kubenswrapper[26474]: I0223 13:25:30.795446 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92rf9\" (UniqueName: \"kubernetes.io/projected/0a34c409-b181-4673-8983-0195a142c22d-kube-api-access-92rf9\") pod \"heat-operator-controller-manager-69f49c598c-kn7n2\" (UID: \"0a34c409-b181-4673-8983-0195a142c22d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" Feb 23 13:25:30.798415 master-0 kubenswrapper[26474]: I0223 13:25:30.798362 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs"] Feb 23 13:25:30.812084 master-0 kubenswrapper[26474]: I0223 13:25:30.804004 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" Feb 23 13:25:30.816058 master-0 kubenswrapper[26474]: I0223 13:25:30.816005 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" Feb 23 13:25:30.828204 master-0 kubenswrapper[26474]: I0223 13:25:30.828160 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92rf9\" (UniqueName: \"kubernetes.io/projected/0a34c409-b181-4673-8983-0195a142c22d-kube-api-access-92rf9\") pod \"heat-operator-controller-manager-69f49c598c-kn7n2\" (UID: \"0a34c409-b181-4673-8983-0195a142c22d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" Feb 23 13:25:30.830193 master-0 kubenswrapper[26474]: I0223 13:25:30.829740 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs"] Feb 23 13:25:30.909610 master-0 kubenswrapper[26474]: I0223 13:25:30.900778 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqp8s\" (UniqueName: \"kubernetes.io/projected/6dc3991b-f1a6-436f-aa4c-19d6e5e1d376-kube-api-access-tqp8s\") pod \"horizon-operator-controller-manager-5b9b8895d5-fphhs\" (UID: \"6dc3991b-f1a6-436f-aa4c-19d6e5e1d376\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" Feb 23 13:25:30.909610 master-0 kubenswrapper[26474]: I0223 13:25:30.904781 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm"] Feb 23 13:25:30.909610 master-0 kubenswrapper[26474]: I0223 13:25:30.906175 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:30.911455 master-0 kubenswrapper[26474]: I0223 13:25:30.910107 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 23 13:25:30.915167 master-0 kubenswrapper[26474]: I0223 13:25:30.915119 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm"] Feb 23 13:25:30.924655 master-0 kubenswrapper[26474]: I0223 13:25:30.924590 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84"] Feb 23 13:25:30.926056 master-0 kubenswrapper[26474]: I0223 13:25:30.926024 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" Feb 23 13:25:30.955085 master-0 kubenswrapper[26474]: I0223 13:25:30.954857 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84"] Feb 23 13:25:30.975462 master-0 kubenswrapper[26474]: I0223 13:25:30.974670 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t"] Feb 23 13:25:30.976069 master-0 kubenswrapper[26474]: I0223 13:25:30.976048 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" Feb 23 13:25:30.980969 master-0 kubenswrapper[26474]: I0223 13:25:30.980923 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" Feb 23 13:25:31.004519 master-0 kubenswrapper[26474]: I0223 13:25:31.004333 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" Feb 23 13:25:31.009432 master-0 kubenswrapper[26474]: I0223 13:25:31.008234 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f2tv\" (UniqueName: \"kubernetes.io/projected/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-kube-api-access-8f2tv\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:31.009432 master-0 kubenswrapper[26474]: I0223 13:25:31.008290 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:31.009432 master-0 kubenswrapper[26474]: I0223 13:25:31.008378 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqp8s\" (UniqueName: \"kubernetes.io/projected/6dc3991b-f1a6-436f-aa4c-19d6e5e1d376-kube-api-access-tqp8s\") pod \"horizon-operator-controller-manager-5b9b8895d5-fphhs\" (UID: \"6dc3991b-f1a6-436f-aa4c-19d6e5e1d376\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" Feb 23 13:25:31.009432 master-0 kubenswrapper[26474]: I0223 13:25:31.008417 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzrr\" (UniqueName: \"kubernetes.io/projected/6bcc1ece-0c36-4593-9134-31adc6f5b6e3-kube-api-access-rlzrr\") pod \"ironic-operator-controller-manager-554564d7fc-h6q84\" (UID: \"6bcc1ece-0c36-4593-9134-31adc6f5b6e3\") " 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" Feb 23 13:25:31.009432 master-0 kubenswrapper[26474]: I0223 13:25:31.008475 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdqxq\" (UniqueName: \"kubernetes.io/projected/2e3f02c0-ac53-4ed1-a735-c46648724b7c-kube-api-access-pdqxq\") pod \"keystone-operator-controller-manager-b4d948c87-2tm5t\" (UID: \"2e3f02c0-ac53-4ed1-a735-c46648724b7c\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" Feb 23 13:25:31.009432 master-0 kubenswrapper[26474]: I0223 13:25:31.009079 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t"] Feb 23 13:25:31.033482 master-0 kubenswrapper[26474]: I0223 13:25:31.033440 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6"] Feb 23 13:25:31.033993 master-0 kubenswrapper[26474]: I0223 13:25:31.033964 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqp8s\" (UniqueName: \"kubernetes.io/projected/6dc3991b-f1a6-436f-aa4c-19d6e5e1d376-kube-api-access-tqp8s\") pod \"horizon-operator-controller-manager-5b9b8895d5-fphhs\" (UID: \"6dc3991b-f1a6-436f-aa4c-19d6e5e1d376\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" Feb 23 13:25:31.040353 master-0 kubenswrapper[26474]: I0223 13:25:31.035176 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" Feb 23 13:25:31.058761 master-0 kubenswrapper[26474]: I0223 13:25:31.058632 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6"] Feb 23 13:25:31.095497 master-0 kubenswrapper[26474]: I0223 13:25:31.091247 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb"] Feb 23 13:25:31.095497 master-0 kubenswrapper[26474]: I0223 13:25:31.092696 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" Feb 23 13:25:31.110811 master-0 kubenswrapper[26474]: I0223 13:25:31.110740 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f2tv\" (UniqueName: \"kubernetes.io/projected/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-kube-api-access-8f2tv\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:31.110980 master-0 kubenswrapper[26474]: I0223 13:25:31.110821 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:31.111064 master-0 kubenswrapper[26474]: E0223 13:25:31.111019 26474 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:31.111130 master-0 kubenswrapper[26474]: E0223 13:25:31.111111 26474 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert podName:6ad5cdc1-3784-4521-9f4f-e3f3877f8c31 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:31.611087573 +0000 UTC m=+653.457595250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert") pod "infra-operator-controller-manager-5f879c76b6-q5tzm" (UID: "6ad5cdc1-3784-4521-9f4f-e3f3877f8c31") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:31.112704 master-0 kubenswrapper[26474]: I0223 13:25:31.112673 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlzrr\" (UniqueName: \"kubernetes.io/projected/6bcc1ece-0c36-4593-9134-31adc6f5b6e3-kube-api-access-rlzrr\") pod \"ironic-operator-controller-manager-554564d7fc-h6q84\" (UID: \"6bcc1ece-0c36-4593-9134-31adc6f5b6e3\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" Feb 23 13:25:31.112768 master-0 kubenswrapper[26474]: I0223 13:25:31.112738 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdqxq\" (UniqueName: \"kubernetes.io/projected/2e3f02c0-ac53-4ed1-a735-c46648724b7c-kube-api-access-pdqxq\") pod \"keystone-operator-controller-manager-b4d948c87-2tm5t\" (UID: \"2e3f02c0-ac53-4ed1-a735-c46648724b7c\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" Feb 23 13:25:31.123869 master-0 kubenswrapper[26474]: I0223 13:25:31.117617 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb"] Feb 23 13:25:31.123869 master-0 kubenswrapper[26474]: I0223 13:25:31.122974 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb"] Feb 23 13:25:31.124319 master-0 
kubenswrapper[26474]: I0223 13:25:31.124293 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" Feb 23 13:25:31.138192 master-0 kubenswrapper[26474]: I0223 13:25:31.138132 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdqxq\" (UniqueName: \"kubernetes.io/projected/2e3f02c0-ac53-4ed1-a735-c46648724b7c-kube-api-access-pdqxq\") pod \"keystone-operator-controller-manager-b4d948c87-2tm5t\" (UID: \"2e3f02c0-ac53-4ed1-a735-c46648724b7c\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" Feb 23 13:25:31.140727 master-0 kubenswrapper[26474]: I0223 13:25:31.140666 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f2tv\" (UniqueName: \"kubernetes.io/projected/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-kube-api-access-8f2tv\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:31.151178 master-0 kubenswrapper[26474]: I0223 13:25:31.144574 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv"] Feb 23 13:25:31.151178 master-0 kubenswrapper[26474]: I0223 13:25:31.145800 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" Feb 23 13:25:31.156438 master-0 kubenswrapper[26474]: I0223 13:25:31.156307 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlzrr\" (UniqueName: \"kubernetes.io/projected/6bcc1ece-0c36-4593-9134-31adc6f5b6e3-kube-api-access-rlzrr\") pod \"ironic-operator-controller-manager-554564d7fc-h6q84\" (UID: \"6bcc1ece-0c36-4593-9134-31adc6f5b6e3\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" Feb 23 13:25:31.163363 master-0 kubenswrapper[26474]: I0223 13:25:31.163284 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb"] Feb 23 13:25:31.175508 master-0 kubenswrapper[26474]: I0223 13:25:31.175436 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" Feb 23 13:25:31.179063 master-0 kubenswrapper[26474]: I0223 13:25:31.179005 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv"] Feb 23 13:25:31.189982 master-0 kubenswrapper[26474]: I0223 13:25:31.189927 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff"] Feb 23 13:25:31.191583 master-0 kubenswrapper[26474]: I0223 13:25:31.191546 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" Feb 23 13:25:31.203370 master-0 kubenswrapper[26474]: I0223 13:25:31.200483 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff"] Feb 23 13:25:31.217266 master-0 kubenswrapper[26474]: I0223 13:25:31.210955 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6"] Feb 23 13:25:31.217586 master-0 kubenswrapper[26474]: I0223 13:25:31.217332 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446zs\" (UniqueName: \"kubernetes.io/projected/68e3868d-86e1-4564-84ec-e290e9ac1aa7-kube-api-access-446zs\") pod \"mariadb-operator-controller-manager-6994f66f48-z98lb\" (UID: \"68e3868d-86e1-4564-84ec-e290e9ac1aa7\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" Feb 23 13:25:31.217586 master-0 kubenswrapper[26474]: I0223 13:25:31.217536 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs7lv\" (UniqueName: \"kubernetes.io/projected/130ccb2a-cc03-4cb7-a439-fc8769a64b63-kube-api-access-hs7lv\") pod \"manila-operator-controller-manager-67d996989d-g5pv6\" (UID: \"130ccb2a-cc03-4cb7-a439-fc8769a64b63\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" Feb 23 13:25:31.219662 master-0 kubenswrapper[26474]: I0223 13:25:31.219499 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:31.233577 master-0 kubenswrapper[26474]: I0223 13:25:31.223827 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 23 13:25:31.233577 master-0 kubenswrapper[26474]: I0223 13:25:31.231154 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6"] Feb 23 13:25:31.242970 master-0 kubenswrapper[26474]: I0223 13:25:31.235945 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" Feb 23 13:25:31.295535 master-0 kubenswrapper[26474]: I0223 13:25:31.295444 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" Feb 23 13:25:31.297275 master-0 kubenswrapper[26474]: I0223 13:25:31.297171 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6"] Feb 23 13:25:31.306612 master-0 kubenswrapper[26474]: I0223 13:25:31.304395 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" Feb 23 13:25:31.314925 master-0 kubenswrapper[26474]: I0223 13:25:31.314728 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7"] Feb 23 13:25:31.317362 master-0 kubenswrapper[26474]: I0223 13:25:31.317291 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" Feb 23 13:25:31.335139 master-0 kubenswrapper[26474]: I0223 13:25:31.335079 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6"] Feb 23 13:25:31.345222 master-0 kubenswrapper[26474]: I0223 13:25:31.344647 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:31.356677 master-0 kubenswrapper[26474]: I0223 13:25:31.348747 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvzk\" (UniqueName: \"kubernetes.io/projected/51fe69f3-87d3-4318-87af-cb2cc650c102-kube-api-access-lvvzk\") pod \"ovn-operator-controller-manager-5955d8c787-w2ql6\" (UID: \"51fe69f3-87d3-4318-87af-cb2cc650c102\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" Feb 23 13:25:31.356677 master-0 kubenswrapper[26474]: I0223 13:25:31.349046 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwj94\" (UniqueName: \"kubernetes.io/projected/8f0efba8-d92e-48b3-affc-1f155052edeb-kube-api-access-qwj94\") pod \"neutron-operator-controller-manager-6bd4687957-wwjlb\" (UID: \"8f0efba8-d92e-48b3-affc-1f155052edeb\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" Feb 23 13:25:31.356677 master-0 kubenswrapper[26474]: I0223 13:25:31.349099 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ngp\" (UniqueName: 
\"kubernetes.io/projected/81d4eac4-5e91-43b6-9dcd-485fd51b32da-kube-api-access-g7ngp\") pod \"octavia-operator-controller-manager-659dc6bbfc-k85ff\" (UID: \"81d4eac4-5e91-43b6-9dcd-485fd51b32da\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" Feb 23 13:25:31.356677 master-0 kubenswrapper[26474]: I0223 13:25:31.349144 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs7lv\" (UniqueName: \"kubernetes.io/projected/130ccb2a-cc03-4cb7-a439-fc8769a64b63-kube-api-access-hs7lv\") pod \"manila-operator-controller-manager-67d996989d-g5pv6\" (UID: \"130ccb2a-cc03-4cb7-a439-fc8769a64b63\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" Feb 23 13:25:31.356677 master-0 kubenswrapper[26474]: I0223 13:25:31.349172 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gffn\" (UniqueName: \"kubernetes.io/projected/bce90f86-6940-4838-8b91-09eccef0ada1-kube-api-access-7gffn\") pod \"nova-operator-controller-manager-567668f5cf-vrkfv\" (UID: \"bce90f86-6940-4838-8b91-09eccef0ada1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" Feb 23 13:25:31.356677 master-0 kubenswrapper[26474]: I0223 13:25:31.349544 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvw2g\" (UniqueName: \"kubernetes.io/projected/50c65748-710a-45bf-b87c-da417b425d24-kube-api-access-jvw2g\") pod \"placement-operator-controller-manager-8497b45c89-22lb7\" (UID: \"50c65748-710a-45bf-b87c-da417b425d24\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" Feb 23 13:25:31.356677 master-0 kubenswrapper[26474]: I0223 13:25:31.349585 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmvq\" (UniqueName: 
\"kubernetes.io/projected/e583a795-8ab8-4cb3-ab87-770e147b4fcd-kube-api-access-8dmvq\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:31.356677 master-0 kubenswrapper[26474]: I0223 13:25:31.350518 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-446zs\" (UniqueName: \"kubernetes.io/projected/68e3868d-86e1-4564-84ec-e290e9ac1aa7-kube-api-access-446zs\") pod \"mariadb-operator-controller-manager-6994f66f48-z98lb\" (UID: \"68e3868d-86e1-4564-84ec-e290e9ac1aa7\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" Feb 23 13:25:31.359158 master-0 kubenswrapper[26474]: I0223 13:25:31.357725 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b474t"] Feb 23 13:25:31.359981 master-0 kubenswrapper[26474]: I0223 13:25:31.359741 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" Feb 23 13:25:31.367927 master-0 kubenswrapper[26474]: I0223 13:25:31.367861 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7"] Feb 23 13:25:31.372223 master-0 kubenswrapper[26474]: I0223 13:25:31.372166 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-446zs\" (UniqueName: \"kubernetes.io/projected/68e3868d-86e1-4564-84ec-e290e9ac1aa7-kube-api-access-446zs\") pod \"mariadb-operator-controller-manager-6994f66f48-z98lb\" (UID: \"68e3868d-86e1-4564-84ec-e290e9ac1aa7\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" Feb 23 13:25:31.375635 master-0 kubenswrapper[26474]: I0223 13:25:31.374962 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs7lv\" (UniqueName: \"kubernetes.io/projected/130ccb2a-cc03-4cb7-a439-fc8769a64b63-kube-api-access-hs7lv\") pod \"manila-operator-controller-manager-67d996989d-g5pv6\" (UID: \"130ccb2a-cc03-4cb7-a439-fc8769a64b63\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" Feb 23 13:25:31.382409 master-0 kubenswrapper[26474]: I0223 13:25:31.376726 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b474t"] Feb 23 13:25:31.386455 master-0 kubenswrapper[26474]: I0223 13:25:31.386256 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn"] Feb 23 13:25:31.388168 master-0 kubenswrapper[26474]: I0223 13:25:31.388141 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" Feb 23 13:25:31.394135 master-0 kubenswrapper[26474]: I0223 13:25:31.391490 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" Feb 23 13:25:31.399056 master-0 kubenswrapper[26474]: I0223 13:25:31.398997 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn"] Feb 23 13:25:31.421694 master-0 kubenswrapper[26474]: I0223 13:25:31.421602 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67"] Feb 23 13:25:31.427068 master-0 kubenswrapper[26474]: I0223 13:25:31.426994 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" Feb 23 13:25:31.430932 master-0 kubenswrapper[26474]: I0223 13:25:31.428115 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" Feb 23 13:25:31.439879 master-0 kubenswrapper[26474]: I0223 13:25:31.432522 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67"] Feb 23 13:25:31.447871 master-0 kubenswrapper[26474]: I0223 13:25:31.447808 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl"] Feb 23 13:25:31.451407 master-0 kubenswrapper[26474]: I0223 13:25:31.451328 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" Feb 23 13:25:31.452415 master-0 kubenswrapper[26474]: I0223 13:25:31.452355 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7ngp\" (UniqueName: \"kubernetes.io/projected/81d4eac4-5e91-43b6-9dcd-485fd51b32da-kube-api-access-g7ngp\") pod \"octavia-operator-controller-manager-659dc6bbfc-k85ff\" (UID: \"81d4eac4-5e91-43b6-9dcd-485fd51b32da\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" Feb 23 13:25:31.452498 master-0 kubenswrapper[26474]: I0223 13:25:31.452447 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gffn\" (UniqueName: \"kubernetes.io/projected/bce90f86-6940-4838-8b91-09eccef0ada1-kube-api-access-7gffn\") pod \"nova-operator-controller-manager-567668f5cf-vrkfv\" (UID: \"bce90f86-6940-4838-8b91-09eccef0ada1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" Feb 23 13:25:31.452575 master-0 kubenswrapper[26474]: I0223 13:25:31.452545 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvw2g\" (UniqueName: \"kubernetes.io/projected/50c65748-710a-45bf-b87c-da417b425d24-kube-api-access-jvw2g\") pod \"placement-operator-controller-manager-8497b45c89-22lb7\" (UID: \"50c65748-710a-45bf-b87c-da417b425d24\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" Feb 23 13:25:31.452620 master-0 kubenswrapper[26474]: I0223 13:25:31.452586 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmvq\" (UniqueName: \"kubernetes.io/projected/e583a795-8ab8-4cb3-ab87-770e147b4fcd-kube-api-access-8dmvq\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:31.452684 master-0 kubenswrapper[26474]: I0223 13:25:31.452669 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrkc\" (UniqueName: \"kubernetes.io/projected/1b3fd8f4-0323-4cf3-a20c-94ffd694f226-kube-api-access-twrkc\") pod \"telemetry-operator-controller-manager-589c568786-rbgvn\" (UID: \"1b3fd8f4-0323-4cf3-a20c-94ffd694f226\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" Feb 23 13:25:31.452741 master-0 kubenswrapper[26474]: I0223 13:25:31.452725 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:31.452778 master-0 kubenswrapper[26474]: I0223 13:25:31.452764 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z78w\" (UniqueName: \"kubernetes.io/projected/2522d9b4-e710-4c6c-babe-50b15608f82f-kube-api-access-7z78w\") pod \"test-operator-controller-manager-5dc6794d5b-nrq67\" (UID: \"2522d9b4-e710-4c6c-babe-50b15608f82f\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" Feb 23 13:25:31.452845 master-0 kubenswrapper[26474]: I0223 13:25:31.452827 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvzk\" (UniqueName: \"kubernetes.io/projected/51fe69f3-87d3-4318-87af-cb2cc650c102-kube-api-access-lvvzk\") pod \"ovn-operator-controller-manager-5955d8c787-w2ql6\" (UID: \"51fe69f3-87d3-4318-87af-cb2cc650c102\") " 
pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" Feb 23 13:25:31.452880 master-0 kubenswrapper[26474]: I0223 13:25:31.452857 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjch\" (UniqueName: \"kubernetes.io/projected/a98204b6-f613-417e-a067-f418edda4d8f-kube-api-access-tdjch\") pod \"swift-operator-controller-manager-68f46476f-b474t\" (UID: \"a98204b6-f613-417e-a067-f418edda4d8f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" Feb 23 13:25:31.452923 master-0 kubenswrapper[26474]: I0223 13:25:31.452904 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwj94\" (UniqueName: \"kubernetes.io/projected/8f0efba8-d92e-48b3-affc-1f155052edeb-kube-api-access-qwj94\") pod \"neutron-operator-controller-manager-6bd4687957-wwjlb\" (UID: \"8f0efba8-d92e-48b3-affc-1f155052edeb\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" Feb 23 13:25:31.458052 master-0 kubenswrapper[26474]: E0223 13:25:31.454746 26474 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:31.458052 master-0 kubenswrapper[26474]: E0223 13:25:31.454796 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert podName:e583a795-8ab8-4cb3-ab87-770e147b4fcd nodeName:}" failed. No retries permitted until 2026-02-23 13:25:31.954781349 +0000 UTC m=+653.801289026 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" (UID: "e583a795-8ab8-4cb3-ab87-770e147b4fcd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:31.479006 master-0 kubenswrapper[26474]: I0223 13:25:31.473669 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwj94\" (UniqueName: \"kubernetes.io/projected/8f0efba8-d92e-48b3-affc-1f155052edeb-kube-api-access-qwj94\") pod \"neutron-operator-controller-manager-6bd4687957-wwjlb\" (UID: \"8f0efba8-d92e-48b3-affc-1f155052edeb\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" Feb 23 13:25:31.479006 master-0 kubenswrapper[26474]: I0223 13:25:31.475321 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvw2g\" (UniqueName: \"kubernetes.io/projected/50c65748-710a-45bf-b87c-da417b425d24-kube-api-access-jvw2g\") pod \"placement-operator-controller-manager-8497b45c89-22lb7\" (UID: \"50c65748-710a-45bf-b87c-da417b425d24\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" Feb 23 13:25:31.479006 master-0 kubenswrapper[26474]: I0223 13:25:31.477706 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" Feb 23 13:25:31.479006 master-0 kubenswrapper[26474]: I0223 13:25:31.478376 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvzk\" (UniqueName: \"kubernetes.io/projected/51fe69f3-87d3-4318-87af-cb2cc650c102-kube-api-access-lvvzk\") pod \"ovn-operator-controller-manager-5955d8c787-w2ql6\" (UID: \"51fe69f3-87d3-4318-87af-cb2cc650c102\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" Feb 23 13:25:31.479006 master-0 kubenswrapper[26474]: I0223 13:25:31.478725 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmvq\" (UniqueName: \"kubernetes.io/projected/e583a795-8ab8-4cb3-ab87-770e147b4fcd-kube-api-access-8dmvq\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:31.479006 master-0 kubenswrapper[26474]: I0223 13:25:31.478743 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7ngp\" (UniqueName: \"kubernetes.io/projected/81d4eac4-5e91-43b6-9dcd-485fd51b32da-kube-api-access-g7ngp\") pod \"octavia-operator-controller-manager-659dc6bbfc-k85ff\" (UID: \"81d4eac4-5e91-43b6-9dcd-485fd51b32da\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" Feb 23 13:25:31.484698 master-0 kubenswrapper[26474]: I0223 13:25:31.484641 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gffn\" (UniqueName: \"kubernetes.io/projected/bce90f86-6940-4838-8b91-09eccef0ada1-kube-api-access-7gffn\") pod \"nova-operator-controller-manager-567668f5cf-vrkfv\" (UID: \"bce90f86-6940-4838-8b91-09eccef0ada1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" Feb 23 
13:25:31.497887 master-0 kubenswrapper[26474]: I0223 13:25:31.497844 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" Feb 23 13:25:31.501522 master-0 kubenswrapper[26474]: I0223 13:25:31.501437 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl"] Feb 23 13:25:31.528560 master-0 kubenswrapper[26474]: I0223 13:25:31.525405 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" Feb 23 13:25:31.558300 master-0 kubenswrapper[26474]: I0223 13:25:31.555239 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z78w\" (UniqueName: \"kubernetes.io/projected/2522d9b4-e710-4c6c-babe-50b15608f82f-kube-api-access-7z78w\") pod \"test-operator-controller-manager-5dc6794d5b-nrq67\" (UID: \"2522d9b4-e710-4c6c-babe-50b15608f82f\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" Feb 23 13:25:31.559959 master-0 kubenswrapper[26474]: I0223 13:25:31.558724 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjch\" (UniqueName: \"kubernetes.io/projected/a98204b6-f613-417e-a067-f418edda4d8f-kube-api-access-tdjch\") pod \"swift-operator-controller-manager-68f46476f-b474t\" (UID: \"a98204b6-f613-417e-a067-f418edda4d8f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" Feb 23 13:25:31.559959 master-0 kubenswrapper[26474]: I0223 13:25:31.559132 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twrkc\" (UniqueName: \"kubernetes.io/projected/1b3fd8f4-0323-4cf3-a20c-94ffd694f226-kube-api-access-twrkc\") pod \"telemetry-operator-controller-manager-589c568786-rbgvn\" (UID: \"1b3fd8f4-0323-4cf3-a20c-94ffd694f226\") 
" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" Feb 23 13:25:31.559959 master-0 kubenswrapper[26474]: I0223 13:25:31.559281 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dqn\" (UniqueName: \"kubernetes.io/projected/62f3d897-cddf-4020-ac9e-fe028bf95c21-kube-api-access-s6dqn\") pod \"watcher-operator-controller-manager-bccc79885-vpgcl\" (UID: \"62f3d897-cddf-4020-ac9e-fe028bf95c21\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" Feb 23 13:25:31.587694 master-0 kubenswrapper[26474]: I0223 13:25:31.583329 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z78w\" (UniqueName: \"kubernetes.io/projected/2522d9b4-e710-4c6c-babe-50b15608f82f-kube-api-access-7z78w\") pod \"test-operator-controller-manager-5dc6794d5b-nrq67\" (UID: \"2522d9b4-e710-4c6c-babe-50b15608f82f\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" Feb 23 13:25:31.589752 master-0 kubenswrapper[26474]: I0223 13:25:31.589654 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdjch\" (UniqueName: \"kubernetes.io/projected/a98204b6-f613-417e-a067-f418edda4d8f-kube-api-access-tdjch\") pod \"swift-operator-controller-manager-68f46476f-b474t\" (UID: \"a98204b6-f613-417e-a067-f418edda4d8f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" Feb 23 13:25:31.599590 master-0 kubenswrapper[26474]: I0223 13:25:31.599187 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twrkc\" (UniqueName: \"kubernetes.io/projected/1b3fd8f4-0323-4cf3-a20c-94ffd694f226-kube-api-access-twrkc\") pod \"telemetry-operator-controller-manager-589c568786-rbgvn\" (UID: \"1b3fd8f4-0323-4cf3-a20c-94ffd694f226\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" Feb 23 13:25:31.609851 master-0 
kubenswrapper[26474]: I0223 13:25:31.609737 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"] Feb 23 13:25:31.621604 master-0 kubenswrapper[26474]: I0223 13:25:31.621547 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:31.632065 master-0 kubenswrapper[26474]: I0223 13:25:31.632007 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 23 13:25:31.632271 master-0 kubenswrapper[26474]: I0223 13:25:31.632252 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 23 13:25:31.633139 master-0 kubenswrapper[26474]: I0223 13:25:31.633109 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"] Feb 23 13:25:31.639422 master-0 kubenswrapper[26474]: I0223 13:25:31.638943 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" Feb 23 13:25:31.661025 master-0 kubenswrapper[26474]: I0223 13:25:31.660945 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:31.661265 master-0 kubenswrapper[26474]: I0223 13:25:31.661071 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dqn\" (UniqueName: \"kubernetes.io/projected/62f3d897-cddf-4020-ac9e-fe028bf95c21-kube-api-access-s6dqn\") pod \"watcher-operator-controller-manager-bccc79885-vpgcl\" (UID: \"62f3d897-cddf-4020-ac9e-fe028bf95c21\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" Feb 23 13:25:31.661265 master-0 kubenswrapper[26474]: I0223 13:25:31.661129 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:31.661265 master-0 kubenswrapper[26474]: I0223 13:25:31.661156 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:31.661723 master-0 
kubenswrapper[26474]: E0223 13:25:31.661501 26474 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:31.661723 master-0 kubenswrapper[26474]: E0223 13:25:31.661612 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert podName:6ad5cdc1-3784-4521-9f4f-e3f3877f8c31 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:32.661584012 +0000 UTC m=+654.508091859 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert") pod "infra-operator-controller-manager-5f879c76b6-q5tzm" (UID: "6ad5cdc1-3784-4521-9f4f-e3f3877f8c31") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:31.662024 master-0 kubenswrapper[26474]: I0223 13:25:31.661987 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jk28\" (UniqueName: \"kubernetes.io/projected/340bb764-ee68-42e8-81da-a6eb1790da92-kube-api-access-9jk28\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:31.681834 master-0 kubenswrapper[26474]: I0223 13:25:31.681759 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" Feb 23 13:25:31.683275 master-0 kubenswrapper[26474]: I0223 13:25:31.682808 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9"] Feb 23 13:25:31.684504 master-0 kubenswrapper[26474]: I0223 13:25:31.684472 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9" Feb 23 13:25:31.692747 master-0 kubenswrapper[26474]: I0223 13:25:31.692663 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dqn\" (UniqueName: \"kubernetes.io/projected/62f3d897-cddf-4020-ac9e-fe028bf95c21-kube-api-access-s6dqn\") pod \"watcher-operator-controller-manager-bccc79885-vpgcl\" (UID: \"62f3d897-cddf-4020-ac9e-fe028bf95c21\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" Feb 23 13:25:31.696121 master-0 kubenswrapper[26474]: I0223 13:25:31.695527 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" Feb 23 13:25:31.710721 master-0 kubenswrapper[26474]: I0223 13:25:31.707524 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9"] Feb 23 13:25:31.714170 master-0 kubenswrapper[26474]: I0223 13:25:31.714067 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" Feb 23 13:25:31.769590 master-0 kubenswrapper[26474]: I0223 13:25:31.769498 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" Feb 23 13:25:31.771688 master-0 kubenswrapper[26474]: I0223 13:25:31.770850 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcdtc\" (UniqueName: \"kubernetes.io/projected/5674fd74-f83f-4fff-8274-02567d473982-kube-api-access-hcdtc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4cbz9\" (UID: \"5674fd74-f83f-4fff-8274-02567d473982\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9" Feb 23 13:25:31.771688 master-0 kubenswrapper[26474]: I0223 13:25:31.770932 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jk28\" (UniqueName: \"kubernetes.io/projected/340bb764-ee68-42e8-81da-a6eb1790da92-kube-api-access-9jk28\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:31.771688 master-0 kubenswrapper[26474]: I0223 13:25:31.771057 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:31.771688 master-0 kubenswrapper[26474]: I0223 13:25:31.771085 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 
13:25:31.771688 master-0 kubenswrapper[26474]: E0223 13:25:31.771192 26474 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:25:31.771688 master-0 kubenswrapper[26474]: E0223 13:25:31.771240 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:32.271224186 +0000 UTC m=+654.117731863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "metrics-server-cert" not found Feb 23 13:25:31.771688 master-0 kubenswrapper[26474]: E0223 13:25:31.771479 26474 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:25:31.771688 master-0 kubenswrapper[26474]: E0223 13:25:31.771529 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:32.271512612 +0000 UTC m=+654.118020289 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "webhook-server-cert" not found Feb 23 13:25:31.784466 master-0 kubenswrapper[26474]: I0223 13:25:31.783205 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq"] Feb 23 13:25:31.794053 master-0 kubenswrapper[26474]: I0223 13:25:31.793927 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" Feb 23 13:25:31.794417 master-0 kubenswrapper[26474]: I0223 13:25:31.794387 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jk28\" (UniqueName: \"kubernetes.io/projected/340bb764-ee68-42e8-81da-a6eb1790da92-kube-api-access-9jk28\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:31.850623 master-0 kubenswrapper[26474]: I0223 13:25:31.850535 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" event={"ID":"22f5345b-e9af-4977-b60c-bd8c78e5dbf5","Type":"ContainerStarted","Data":"8a985620531688c5117869f7dee62543c0a6c8476a171ce8503e3488fae4917a"} Feb 23 13:25:31.851946 master-0 kubenswrapper[26474]: I0223 13:25:31.851887 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" event={"ID":"8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4","Type":"ContainerStarted","Data":"4979fbed6fb6b40e070dcfc1c3848b482da79bafcfb6a5eb5bcf65a4366c5b65"} Feb 23 13:25:31.860143 master-0 kubenswrapper[26474]: I0223 
13:25:31.860055 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" event={"ID":"13a0dd7d-3565-428b-b1f6-75a3956c808b","Type":"ContainerStarted","Data":"87ff4e07d9461016502bc96daa9926f2d345f5a8116fd25b7d17ee1a1660c1f7"} Feb 23 13:25:31.860670 master-0 kubenswrapper[26474]: I0223 13:25:31.860600 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6"] Feb 23 13:25:31.906277 master-0 kubenswrapper[26474]: I0223 13:25:31.906037 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcdtc\" (UniqueName: \"kubernetes.io/projected/5674fd74-f83f-4fff-8274-02567d473982-kube-api-access-hcdtc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4cbz9\" (UID: \"5674fd74-f83f-4fff-8274-02567d473982\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9" Feb 23 13:25:31.933082 master-0 kubenswrapper[26474]: I0223 13:25:31.932592 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcdtc\" (UniqueName: \"kubernetes.io/projected/5674fd74-f83f-4fff-8274-02567d473982-kube-api-access-hcdtc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4cbz9\" (UID: \"5674fd74-f83f-4fff-8274-02567d473982\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9" Feb 23 13:25:32.008791 master-0 kubenswrapper[26474]: I0223 13:25:32.008112 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb"] Feb 23 13:25:32.028781 master-0 kubenswrapper[26474]: I0223 13:25:32.028697 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: 
\"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:32.030459 master-0 kubenswrapper[26474]: E0223 13:25:32.030316 26474 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:32.030560 master-0 kubenswrapper[26474]: E0223 13:25:32.030500 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert podName:e583a795-8ab8-4cb3-ab87-770e147b4fcd nodeName:}" failed. No retries permitted until 2026-02-23 13:25:33.030467278 +0000 UTC m=+654.876975125 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" (UID: "e583a795-8ab8-4cb3-ab87-770e147b4fcd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:32.083183 master-0 kubenswrapper[26474]: I0223 13:25:32.083137 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9" Feb 23 13:25:32.208837 master-0 kubenswrapper[26474]: I0223 13:25:32.208783 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs"] Feb 23 13:25:32.228351 master-0 kubenswrapper[26474]: I0223 13:25:32.228291 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2"] Feb 23 13:25:32.244904 master-0 kubenswrapper[26474]: I0223 13:25:32.244371 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5"] Feb 23 13:25:32.357191 master-0 kubenswrapper[26474]: I0223 13:25:32.356410 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:32.357191 master-0 kubenswrapper[26474]: I0223 13:25:32.356475 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:32.357191 master-0 kubenswrapper[26474]: E0223 13:25:32.356692 26474 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:25:32.357191 master-0 kubenswrapper[26474]: E0223 13:25:32.356748 26474 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:33.356732912 +0000 UTC m=+655.203240579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "metrics-server-cert" not found Feb 23 13:25:32.357191 master-0 kubenswrapper[26474]: E0223 13:25:32.357120 26474 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:25:32.357191 master-0 kubenswrapper[26474]: E0223 13:25:32.357148 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:33.357140951 +0000 UTC m=+655.203648628 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "webhook-server-cert" not found Feb 23 13:25:32.530927 master-0 kubenswrapper[26474]: I0223 13:25:32.530734 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t"] Feb 23 13:25:32.538272 master-0 kubenswrapper[26474]: I0223 13:25:32.538207 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb"] Feb 23 13:25:32.555922 master-0 kubenswrapper[26474]: I0223 13:25:32.555729 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84"] Feb 23 13:25:32.574461 master-0 kubenswrapper[26474]: I0223 13:25:32.574032 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6"] Feb 23 13:25:32.682467 master-0 kubenswrapper[26474]: I0223 13:25:32.679969 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:32.682467 master-0 kubenswrapper[26474]: E0223 13:25:32.680178 26474 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:32.682724 master-0 kubenswrapper[26474]: E0223 13:25:32.682651 26474 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert podName:6ad5cdc1-3784-4521-9f4f-e3f3877f8c31 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:34.681638042 +0000 UTC m=+656.528145739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert") pod "infra-operator-controller-manager-5f879c76b6-q5tzm" (UID: "6ad5cdc1-3784-4521-9f4f-e3f3877f8c31") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:32.914302 master-0 kubenswrapper[26474]: I0223 13:25:32.913399 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" event={"ID":"0a34c409-b181-4673-8983-0195a142c22d","Type":"ContainerStarted","Data":"d600167e69048ff6d5ab0e385ccb32f9848def753a9088418a321645830b93f1"} Feb 23 13:25:32.934783 master-0 kubenswrapper[26474]: I0223 13:25:32.934663 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" event={"ID":"8904511a-508f-4a31-a4b0-15cb665eeb6d","Type":"ContainerStarted","Data":"20554444067e1c79cb2b72ddb111bde3f2a6ade65e804c5cf0c03daaac670e5e"} Feb 23 13:25:32.959422 master-0 kubenswrapper[26474]: I0223 13:25:32.956719 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" event={"ID":"2e3f02c0-ac53-4ed1-a735-c46648724b7c","Type":"ContainerStarted","Data":"05622faba58985762cfa29b8b9dd98f270b528ab13d0d5634cbd67666450f073"} Feb 23 13:25:32.967143 master-0 kubenswrapper[26474]: I0223 13:25:32.967078 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" event={"ID":"68e3868d-86e1-4564-84ec-e290e9ac1aa7","Type":"ContainerStarted","Data":"16335351d03b813ed503b1a2157e982f49d478ff49aee4fcb9d5341b2e95ad03"} Feb 23 
13:25:32.994485 master-0 kubenswrapper[26474]: I0223 13:25:32.970162 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" event={"ID":"6dc3991b-f1a6-436f-aa4c-19d6e5e1d376","Type":"ContainerStarted","Data":"e0fff69badbecf41bc7847ce7eeeec56984b1eddc6d03f5959216a6ce58b2bc0"} Feb 23 13:25:32.994485 master-0 kubenswrapper[26474]: I0223 13:25:32.972714 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" event={"ID":"6bcc1ece-0c36-4593-9134-31adc6f5b6e3","Type":"ContainerStarted","Data":"96a07bc1b2b8b06abff8cb1b65d3801367ebc22fb568e7e86961f427d0b181f0"} Feb 23 13:25:32.994485 master-0 kubenswrapper[26474]: I0223 13:25:32.985820 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6"] Feb 23 13:25:33.010334 master-0 kubenswrapper[26474]: I0223 13:25:33.010264 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" event={"ID":"130ccb2a-cc03-4cb7-a439-fc8769a64b63","Type":"ContainerStarted","Data":"5a0ace043584c565f40ae41c90b03c24de8ccafc1b2fc8e485ac4567561a22d7"} Feb 23 13:25:33.038026 master-0 kubenswrapper[26474]: I0223 13:25:33.035770 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb"] Feb 23 13:25:33.047043 master-0 kubenswrapper[26474]: I0223 13:25:33.046932 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv"] Feb 23 13:25:33.083455 master-0 kubenswrapper[26474]: I0223 13:25:33.075114 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff"] Feb 23 13:25:33.093760 master-0 kubenswrapper[26474]: I0223 13:25:33.092400 
26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b474t"] Feb 23 13:25:33.096180 master-0 kubenswrapper[26474]: I0223 13:25:33.096091 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:33.096740 master-0 kubenswrapper[26474]: E0223 13:25:33.096544 26474 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:33.096740 master-0 kubenswrapper[26474]: E0223 13:25:33.096651 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert podName:e583a795-8ab8-4cb3-ab87-770e147b4fcd nodeName:}" failed. No retries permitted until 2026-02-23 13:25:35.096617762 +0000 UTC m=+656.943125439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" (UID: "e583a795-8ab8-4cb3-ab87-770e147b4fcd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:33.201556 master-0 kubenswrapper[26474]: I0223 13:25:33.201232 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67"] Feb 23 13:25:33.202058 master-0 kubenswrapper[26474]: W0223 13:25:33.202017 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2522d9b4_e710_4c6c_babe_50b15608f82f.slice/crio-48dc72241ddf9d206829b330e13e2591bd502a60d92bc9dc1501f08e1563dd83 WatchSource:0}: Error finding container 48dc72241ddf9d206829b330e13e2591bd502a60d92bc9dc1501f08e1563dd83: Status 404 returned error can't find the container with id 48dc72241ddf9d206829b330e13e2591bd502a60d92bc9dc1501f08e1563dd83 Feb 23 13:25:33.216358 master-0 kubenswrapper[26474]: I0223 13:25:33.216276 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl"] Feb 23 13:25:33.232479 master-0 kubenswrapper[26474]: I0223 13:25:33.232318 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7"] Feb 23 13:25:33.290504 master-0 kubenswrapper[26474]: I0223 13:25:33.242385 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn"] Feb 23 13:25:33.291191 master-0 kubenswrapper[26474]: W0223 13:25:33.291120 26474 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b3fd8f4_0323_4cf3_a20c_94ffd694f226.slice/crio-4e89e9c60b4061e46449d450123e941cc79721929c836871ff7248b1164d8428 WatchSource:0}: Error finding container 4e89e9c60b4061e46449d450123e941cc79721929c836871ff7248b1164d8428: Status 404 returned error can't find the container with id 4e89e9c60b4061e46449d450123e941cc79721929c836871ff7248b1164d8428 Feb 23 13:25:33.293618 master-0 kubenswrapper[26474]: E0223 13:25:33.293508 26474 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twrkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-589c568786-rbgvn_openstack-operators(1b3fd8f4-0323-4cf3-a20c-94ffd694f226): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 13:25:33.295037 master-0 kubenswrapper[26474]: E0223 13:25:33.294923 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" podUID="1b3fd8f4-0323-4cf3-a20c-94ffd694f226" Feb 23 13:25:33.419469 master-0 kubenswrapper[26474]: I0223 13:25:33.419356 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:33.419975 master-0 kubenswrapper[26474]: E0223 13:25:33.419459 26474 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 
13:25:33.419975 master-0 kubenswrapper[26474]: E0223 13:25:33.419556 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:35.419528075 +0000 UTC m=+657.266035752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "webhook-server-cert" not found Feb 23 13:25:33.419975 master-0 kubenswrapper[26474]: I0223 13:25:33.419486 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:33.419975 master-0 kubenswrapper[26474]: E0223 13:25:33.419651 26474 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:25:33.419975 master-0 kubenswrapper[26474]: E0223 13:25:33.419761 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:35.41973942 +0000 UTC m=+657.266247097 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "metrics-server-cert" not found Feb 23 13:25:33.564120 master-0 kubenswrapper[26474]: I0223 13:25:33.563916 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9"] Feb 23 13:25:34.037055 master-0 kubenswrapper[26474]: I0223 13:25:34.033929 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" event={"ID":"50c65748-710a-45bf-b87c-da417b425d24","Type":"ContainerStarted","Data":"1147ab62a35ba7865c560eb7a307e6e87a473ba272cd8a8c3608eaf870ba6d62"} Feb 23 13:25:34.039189 master-0 kubenswrapper[26474]: I0223 13:25:34.039125 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" event={"ID":"a98204b6-f613-417e-a067-f418edda4d8f","Type":"ContainerStarted","Data":"e2d4fc5a51dd1dfb227f3307936779b89ce74fea3d651421853a4ade95b63a39"} Feb 23 13:25:34.052496 master-0 kubenswrapper[26474]: I0223 13:25:34.052411 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" event={"ID":"51fe69f3-87d3-4318-87af-cb2cc650c102","Type":"ContainerStarted","Data":"d2180ef8afb691f30f7cb0c0de5bc762ca60e465257426836650d743edec62f7"} Feb 23 13:25:34.065665 master-0 kubenswrapper[26474]: I0223 13:25:34.065473 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" event={"ID":"1b3fd8f4-0323-4cf3-a20c-94ffd694f226","Type":"ContainerStarted","Data":"4e89e9c60b4061e46449d450123e941cc79721929c836871ff7248b1164d8428"} Feb 23 13:25:34.076279 master-0 
kubenswrapper[26474]: I0223 13:25:34.075525 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" event={"ID":"81d4eac4-5e91-43b6-9dcd-485fd51b32da","Type":"ContainerStarted","Data":"9a38661422a6b371361ad1a7905b8385961e336b368bff783f6b79b427cbc2e6"} Feb 23 13:25:34.078950 master-0 kubenswrapper[26474]: E0223 13:25:34.078891 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" podUID="1b3fd8f4-0323-4cf3-a20c-94ffd694f226" Feb 23 13:25:34.081797 master-0 kubenswrapper[26474]: I0223 13:25:34.081725 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" event={"ID":"bce90f86-6940-4838-8b91-09eccef0ada1","Type":"ContainerStarted","Data":"a5ef9232294cdb5615a8ad92eb670455805ba57217edd8d054e56cba8caa1030"} Feb 23 13:25:34.088547 master-0 kubenswrapper[26474]: I0223 13:25:34.083980 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" event={"ID":"2522d9b4-e710-4c6c-babe-50b15608f82f","Type":"ContainerStarted","Data":"48dc72241ddf9d206829b330e13e2591bd502a60d92bc9dc1501f08e1563dd83"} Feb 23 13:25:34.088547 master-0 kubenswrapper[26474]: I0223 13:25:34.087036 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" event={"ID":"8f0efba8-d92e-48b3-affc-1f155052edeb","Type":"ContainerStarted","Data":"d21b957908640abc1d5ce6bd5b36829ad40dadc77a6bfccf36a7528c18d78120"} Feb 23 13:25:34.089300 master-0 kubenswrapper[26474]: I0223 13:25:34.089198 26474 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9" event={"ID":"5674fd74-f83f-4fff-8274-02567d473982","Type":"ContainerStarted","Data":"d869c36df9d0db09d79f079b1ec32b28e0b5de9517ed6cd8f873ea27e66631d5"} Feb 23 13:25:34.096746 master-0 kubenswrapper[26474]: I0223 13:25:34.096680 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" event={"ID":"62f3d897-cddf-4020-ac9e-fe028bf95c21","Type":"ContainerStarted","Data":"1e0c4a1081ab577a9c46efa2e8b4e06dbeb1004a91d5af254f1e160b41d38df3"} Feb 23 13:25:34.706573 master-0 kubenswrapper[26474]: I0223 13:25:34.706307 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:34.707295 master-0 kubenswrapper[26474]: E0223 13:25:34.706552 26474 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:34.707295 master-0 kubenswrapper[26474]: E0223 13:25:34.706682 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert podName:6ad5cdc1-3784-4521-9f4f-e3f3877f8c31 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:38.706652726 +0000 UTC m=+660.553160583 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert") pod "infra-operator-controller-manager-5f879c76b6-q5tzm" (UID: "6ad5cdc1-3784-4521-9f4f-e3f3877f8c31") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:35.116910 master-0 kubenswrapper[26474]: I0223 13:25:35.114899 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:35.116910 master-0 kubenswrapper[26474]: E0223 13:25:35.115143 26474 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:35.116910 master-0 kubenswrapper[26474]: E0223 13:25:35.115238 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert podName:e583a795-8ab8-4cb3-ab87-770e147b4fcd nodeName:}" failed. No retries permitted until 2026-02-23 13:25:39.115216861 +0000 UTC m=+660.961724528 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" (UID: "e583a795-8ab8-4cb3-ab87-770e147b4fcd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:35.116910 master-0 kubenswrapper[26474]: E0223 13:25:35.116853 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" podUID="1b3fd8f4-0323-4cf3-a20c-94ffd694f226" Feb 23 13:25:35.422489 master-0 kubenswrapper[26474]: I0223 13:25:35.421907 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:35.422489 master-0 kubenswrapper[26474]: E0223 13:25:35.422175 26474 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:25:35.422489 master-0 kubenswrapper[26474]: I0223 13:25:35.422224 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:35.422489 master-0 kubenswrapper[26474]: E0223 13:25:35.422298 
26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:39.422242549 +0000 UTC m=+661.268750226 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "webhook-server-cert" not found Feb 23 13:25:35.422802 master-0 kubenswrapper[26474]: E0223 13:25:35.422614 26474 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:25:35.422802 master-0 kubenswrapper[26474]: E0223 13:25:35.422674 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:39.42265761 +0000 UTC m=+661.269165467 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "metrics-server-cert" not found Feb 23 13:25:38.712621 master-0 kubenswrapper[26474]: I0223 13:25:38.711484 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:38.712621 master-0 kubenswrapper[26474]: E0223 13:25:38.711682 26474 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:38.712621 master-0 kubenswrapper[26474]: E0223 13:25:38.711732 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert podName:6ad5cdc1-3784-4521-9f4f-e3f3877f8c31 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:46.711714107 +0000 UTC m=+668.558221794 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert") pod "infra-operator-controller-manager-5f879c76b6-q5tzm" (UID: "6ad5cdc1-3784-4521-9f4f-e3f3877f8c31") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:25:39.123432 master-0 kubenswrapper[26474]: I0223 13:25:39.123274 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:39.124740 master-0 kubenswrapper[26474]: E0223 13:25:39.123493 26474 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:39.124740 master-0 kubenswrapper[26474]: E0223 13:25:39.123581 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert podName:e583a795-8ab8-4cb3-ab87-770e147b4fcd nodeName:}" failed. No retries permitted until 2026-02-23 13:25:47.123558071 +0000 UTC m=+668.970065748 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" (UID: "e583a795-8ab8-4cb3-ab87-770e147b4fcd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:25:39.430756 master-0 kubenswrapper[26474]: I0223 13:25:39.429300 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:39.430756 master-0 kubenswrapper[26474]: E0223 13:25:39.429524 26474 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:25:39.430756 master-0 kubenswrapper[26474]: I0223 13:25:39.429568 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:39.430756 master-0 kubenswrapper[26474]: E0223 13:25:39.429635 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:47.429611785 +0000 UTC m=+669.276119632 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "webhook-server-cert" not found Feb 23 13:25:39.430756 master-0 kubenswrapper[26474]: E0223 13:25:39.429778 26474 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:25:39.430756 master-0 kubenswrapper[26474]: E0223 13:25:39.429846 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:25:47.429824611 +0000 UTC m=+669.276332448 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "metrics-server-cert" not found Feb 23 13:25:46.722985 master-0 kubenswrapper[26474]: I0223 13:25:46.722887 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:46.726270 master-0 kubenswrapper[26474]: I0223 13:25:46.726222 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ad5cdc1-3784-4521-9f4f-e3f3877f8c31-cert\") pod \"infra-operator-controller-manager-5f879c76b6-q5tzm\" (UID: \"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31\") " 
pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:46.869440 master-0 kubenswrapper[26474]: I0223 13:25:46.869309 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" Feb 23 13:25:47.132015 master-0 kubenswrapper[26474]: I0223 13:25:47.131929 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:47.144003 master-0 kubenswrapper[26474]: I0223 13:25:47.143946 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e583a795-8ab8-4cb3-ab87-770e147b4fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9jtxq6\" (UID: \"e583a795-8ab8-4cb3-ab87-770e147b4fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:47.218017 master-0 kubenswrapper[26474]: I0223 13:25:47.217920 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" Feb 23 13:25:47.437790 master-0 kubenswrapper[26474]: I0223 13:25:47.437640 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:47.437790 master-0 kubenswrapper[26474]: I0223 13:25:47.437719 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" Feb 23 13:25:47.438096 master-0 kubenswrapper[26474]: E0223 13:25:47.437885 26474 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:25:47.438096 master-0 kubenswrapper[26474]: E0223 13:25:47.437966 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:26:03.437945813 +0000 UTC m=+685.284453560 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "metrics-server-cert" not found Feb 23 13:25:47.438997 master-0 kubenswrapper[26474]: E0223 13:25:47.437895 26474 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:25:47.439068 master-0 kubenswrapper[26474]: E0223 13:25:47.439022 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs podName:340bb764-ee68-42e8-81da-a6eb1790da92 nodeName:}" failed. No retries permitted until 2026-02-23 13:26:03.438999688 +0000 UTC m=+685.285507365 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-75wjb" (UID: "340bb764-ee68-42e8-81da-a6eb1790da92") : secret "webhook-server-cert" not found Feb 23 13:25:49.397376 master-0 kubenswrapper[26474]: I0223 13:25:49.396413 26474 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:25:51.746751 master-0 kubenswrapper[26474]: I0223 13:25:51.743186 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm"] Feb 23 13:25:51.754392 master-0 kubenswrapper[26474]: I0223 13:25:51.751200 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6"] Feb 23 13:25:52.359866 master-0 kubenswrapper[26474]: I0223 13:25:52.359790 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" 
event={"ID":"8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4","Type":"ContainerStarted","Data":"28e21be85e7b35d9b8da18ee51d5fae4bbd7165f83fcf34b46fa3d5b56324be8"} Feb 23 13:25:52.361064 master-0 kubenswrapper[26474]: I0223 13:25:52.361021 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" Feb 23 13:25:52.363313 master-0 kubenswrapper[26474]: I0223 13:25:52.363256 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" event={"ID":"50c65748-710a-45bf-b87c-da417b425d24","Type":"ContainerStarted","Data":"fdf11813799f5d2a125c2725db118f99e55bb210bc528866f97604e102a8bf60"} Feb 23 13:25:52.364712 master-0 kubenswrapper[26474]: I0223 13:25:52.364356 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" Feb 23 13:25:52.374417 master-0 kubenswrapper[26474]: I0223 13:25:52.374317 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" event={"ID":"13a0dd7d-3565-428b-b1f6-75a3956c808b","Type":"ContainerStarted","Data":"cd5dd61e6e241cc8fbd42de8abaa422a13629cc2cc96ae3fb4bacc3754afac59"} Feb 23 13:25:52.374673 master-0 kubenswrapper[26474]: I0223 13:25:52.374558 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" Feb 23 13:25:52.379170 master-0 kubenswrapper[26474]: I0223 13:25:52.378681 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" event={"ID":"6dc3991b-f1a6-436f-aa4c-19d6e5e1d376","Type":"ContainerStarted","Data":"50a36c975b3141be146bcf19680be5ce0813af74ef38ea2ee032e6649355f50f"} Feb 23 13:25:52.379170 master-0 kubenswrapper[26474]: I0223 
13:25:52.378836 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" Feb 23 13:25:52.388080 master-0 kubenswrapper[26474]: I0223 13:25:52.387965 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" event={"ID":"a98204b6-f613-417e-a067-f418edda4d8f","Type":"ContainerStarted","Data":"325910bd0eb8dd2df723845e866c44bb501007158e1237c7e33336dac6c90fa7"} Feb 23 13:25:52.389244 master-0 kubenswrapper[26474]: I0223 13:25:52.389191 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" Feb 23 13:25:52.399792 master-0 kubenswrapper[26474]: I0223 13:25:52.399727 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" Feb 23 13:25:52.401926 master-0 kubenswrapper[26474]: I0223 13:25:52.401020 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq" podStartSLOduration=2.614515228 podStartE2EDuration="22.400996221s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:31.433129815 +0000 UTC m=+653.279637492" lastFinishedPulling="2026-02-23 13:25:51.219610808 +0000 UTC m=+673.066118485" observedRunningTime="2026-02-23 13:25:52.398728086 +0000 UTC m=+674.245235763" watchObservedRunningTime="2026-02-23 13:25:52.400996221 +0000 UTC m=+674.247503908" Feb 23 13:25:52.423723 master-0 kubenswrapper[26474]: I0223 13:25:52.423635 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" 
event={"ID":"62f3d897-cddf-4020-ac9e-fe028bf95c21","Type":"ContainerStarted","Data":"beaeee7c88a706506f012657f469914d52cbae48cf525c7977698b0a9363a4dc"} Feb 23 13:25:52.423723 master-0 kubenswrapper[26474]: I0223 13:25:52.423697 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" event={"ID":"2e3f02c0-ac53-4ed1-a735-c46648724b7c","Type":"ContainerStarted","Data":"764c51e86cda3fd8983904203600e2fd7fc4576b3ce702a3d2cb108a690ea86c"} Feb 23 13:25:52.423723 master-0 kubenswrapper[26474]: I0223 13:25:52.423727 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" Feb 23 13:25:52.423723 master-0 kubenswrapper[26474]: I0223 13:25:52.423739 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" Feb 23 13:25:52.424085 master-0 kubenswrapper[26474]: I0223 13:25:52.423748 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" event={"ID":"81d4eac4-5e91-43b6-9dcd-485fd51b32da","Type":"ContainerStarted","Data":"8fb6161d4330a6f2372162056e1d67e456e11564b7fe05ad53db093679b0d025"} Feb 23 13:25:52.431506 master-0 kubenswrapper[26474]: I0223 13:25:52.429575 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" event={"ID":"130ccb2a-cc03-4cb7-a439-fc8769a64b63","Type":"ContainerStarted","Data":"b7f1fd08d9e83e3fe785c42ce236bd50a2ad6950f55c09c0dc060de3c2be5ad2"} Feb 23 13:25:52.431506 master-0 kubenswrapper[26474]: I0223 13:25:52.430616 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" Feb 23 13:25:52.446851 master-0 kubenswrapper[26474]: I0223 13:25:52.446722 26474 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" event={"ID":"0a34c409-b181-4673-8983-0195a142c22d","Type":"ContainerStarted","Data":"dc1dda2846c593019b97f73f7c49042cb8fb4410a8c6469a69278fd44ed0ce13"} Feb 23 13:25:52.448394 master-0 kubenswrapper[26474]: I0223 13:25:52.447837 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" Feb 23 13:25:52.449154 master-0 kubenswrapper[26474]: I0223 13:25:52.449092 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" event={"ID":"68e3868d-86e1-4564-84ec-e290e9ac1aa7","Type":"ContainerStarted","Data":"74448d7b157a784dd6b3575568c6baec1108de1498b02c7870aa8c1589a015f8"} Feb 23 13:25:52.451446 master-0 kubenswrapper[26474]: I0223 13:25:52.451275 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" Feb 23 13:25:52.462644 master-0 kubenswrapper[26474]: I0223 13:25:52.462587 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" event={"ID":"e583a795-8ab8-4cb3-ab87-770e147b4fcd","Type":"ContainerStarted","Data":"9d8acf0e8152cc232c1684528c1be1f6a333c36457406cc7e450f7d92f629660"} Feb 23 13:25:52.468439 master-0 kubenswrapper[26474]: I0223 13:25:52.468254 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" event={"ID":"22f5345b-e9af-4977-b60c-bd8c78e5dbf5","Type":"ContainerStarted","Data":"2eca93f8e981b1f6dcee5153579bbca33a3a3c11fee9f0c8c8a843e031378000"} Feb 23 13:25:52.470746 master-0 kubenswrapper[26474]: I0223 13:25:52.469185 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" Feb 23 13:25:52.471000 master-0 kubenswrapper[26474]: I0223 13:25:52.470930 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7" podStartSLOduration=4.446753249 podStartE2EDuration="22.470909493s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:33.23539015 +0000 UTC m=+655.081897827" lastFinishedPulling="2026-02-23 13:25:51.259546394 +0000 UTC m=+673.106054071" observedRunningTime="2026-02-23 13:25:52.451558554 +0000 UTC m=+674.298066251" watchObservedRunningTime="2026-02-23 13:25:52.470909493 +0000 UTC m=+674.317417160" Feb 23 13:25:52.484713 master-0 kubenswrapper[26474]: I0223 13:25:52.483650 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" event={"ID":"6bcc1ece-0c36-4593-9134-31adc6f5b6e3","Type":"ContainerStarted","Data":"c0e70a4648cf21304dccb090aff4a3a4ace9e6e72e89d26a79c6fbe792ff96ec"} Feb 23 13:25:52.484713 master-0 kubenswrapper[26474]: I0223 13:25:52.484537 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" Feb 23 13:25:52.507702 master-0 kubenswrapper[26474]: I0223 13:25:52.505640 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" event={"ID":"8904511a-508f-4a31-a4b0-15cb665eeb6d","Type":"ContainerStarted","Data":"b5a9d8f85f4019096259a5eb2760f70885f3fb51fb9f8073fd548c955c84e3a8"} Feb 23 13:25:52.507702 master-0 kubenswrapper[26474]: I0223 13:25:52.506587 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" Feb 23 13:25:52.524632 master-0 kubenswrapper[26474]: I0223 13:25:52.524361 
26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" event={"ID":"2522d9b4-e710-4c6c-babe-50b15608f82f","Type":"ContainerStarted","Data":"500421b040ab351068cbfb02d31f5a7669402c7c943cadeefcc5413ca46f9973"} Feb 23 13:25:52.524632 master-0 kubenswrapper[26474]: I0223 13:25:52.524609 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" Feb 23 13:25:52.534136 master-0 kubenswrapper[26474]: I0223 13:25:52.533731 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" event={"ID":"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31","Type":"ContainerStarted","Data":"7b08248071ef86f7c2d0d3cbf4beb6343a5191ecc9f368bc0f9e288a08834da5"} Feb 23 13:25:52.546504 master-0 kubenswrapper[26474]: I0223 13:25:52.544863 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" event={"ID":"51fe69f3-87d3-4318-87af-cb2cc650c102","Type":"ContainerStarted","Data":"1ba7f98cf0c94fe0da6790b29e20cccdb892f97d3cc79a64f6234ca399b2e661"} Feb 23 13:25:52.546504 master-0 kubenswrapper[26474]: I0223 13:25:52.546361 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" Feb 23 13:25:52.579252 master-0 kubenswrapper[26474]: I0223 13:25:52.579108 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb" podStartSLOduration=3.6822504719999998 podStartE2EDuration="22.57909043s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:31.811786727 +0000 UTC m=+653.658294404" lastFinishedPulling="2026-02-23 13:25:50.708626685 +0000 UTC m=+672.555134362" observedRunningTime="2026-02-23 
13:25:52.51461696 +0000 UTC m=+674.361124637" watchObservedRunningTime="2026-02-23 13:25:52.57909043 +0000 UTC m=+674.425598107" Feb 23 13:25:52.582520 master-0 kubenswrapper[26474]: I0223 13:25:52.582456 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs" podStartSLOduration=3.575515699 podStartE2EDuration="22.582442151s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.212503692 +0000 UTC m=+654.059011359" lastFinishedPulling="2026-02-23 13:25:51.219430134 +0000 UTC m=+673.065937811" observedRunningTime="2026-02-23 13:25:52.579489659 +0000 UTC m=+674.425997336" watchObservedRunningTime="2026-02-23 13:25:52.582442151 +0000 UTC m=+674.428949828" Feb 23 13:25:52.684732 master-0 kubenswrapper[26474]: I0223 13:25:52.681960 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl" podStartSLOduration=4.627945982 podStartE2EDuration="22.681936968s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:33.208152271 +0000 UTC m=+655.054659948" lastFinishedPulling="2026-02-23 13:25:51.262143257 +0000 UTC m=+673.108650934" observedRunningTime="2026-02-23 13:25:52.626761913 +0000 UTC m=+674.473269590" watchObservedRunningTime="2026-02-23 13:25:52.681936968 +0000 UTC m=+674.528444645" Feb 23 13:25:52.684732 master-0 kubenswrapper[26474]: I0223 13:25:52.682889 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t" podStartSLOduration=4.438077188 podStartE2EDuration="22.682880381s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.982712447 +0000 UTC m=+654.829220124" lastFinishedPulling="2026-02-23 13:25:51.22751563 +0000 UTC m=+673.074023317" 
observedRunningTime="2026-02-23 13:25:52.663592994 +0000 UTC m=+674.510100671" watchObservedRunningTime="2026-02-23 13:25:52.682880381 +0000 UTC m=+674.529388058" Feb 23 13:25:52.751525 master-0 kubenswrapper[26474]: I0223 13:25:52.749086 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t" podStartSLOduration=4.163196167 podStartE2EDuration="22.749061262s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.568937425 +0000 UTC m=+654.415445102" lastFinishedPulling="2026-02-23 13:25:51.15480251 +0000 UTC m=+673.001310197" observedRunningTime="2026-02-23 13:25:52.724884197 +0000 UTC m=+674.571391874" watchObservedRunningTime="2026-02-23 13:25:52.749061262 +0000 UTC m=+674.595568939" Feb 23 13:25:52.769667 master-0 kubenswrapper[26474]: I0223 13:25:52.769493 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2" podStartSLOduration=3.779162407 podStartE2EDuration="22.769465716s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.229069453 +0000 UTC m=+654.075577130" lastFinishedPulling="2026-02-23 13:25:51.219372762 +0000 UTC m=+673.065880439" observedRunningTime="2026-02-23 13:25:52.758824928 +0000 UTC m=+674.605332605" watchObservedRunningTime="2026-02-23 13:25:52.769465716 +0000 UTC m=+674.615973393" Feb 23 13:25:52.813394 master-0 kubenswrapper[26474]: I0223 13:25:52.802872 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb" podStartSLOduration=4.149203599 podStartE2EDuration="22.802850984s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.565624825 +0000 UTC m=+654.412132512" lastFinishedPulling="2026-02-23 13:25:51.21927222 +0000 UTC 
m=+673.065779897" observedRunningTime="2026-02-23 13:25:52.797645878 +0000 UTC m=+674.644153555" watchObservedRunningTime="2026-02-23 13:25:52.802850984 +0000 UTC m=+674.649358661" Feb 23 13:25:52.848686 master-0 kubenswrapper[26474]: I0223 13:25:52.848571 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5" podStartSLOduration=3.891089595 podStartE2EDuration="22.848541589s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.222151666 +0000 UTC m=+654.068659343" lastFinishedPulling="2026-02-23 13:25:51.17960366 +0000 UTC m=+673.026111337" observedRunningTime="2026-02-23 13:25:52.838715281 +0000 UTC m=+674.685222948" watchObservedRunningTime="2026-02-23 13:25:52.848541589 +0000 UTC m=+674.695049266" Feb 23 13:25:52.871478 master-0 kubenswrapper[26474]: I0223 13:25:52.871382 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6" podStartSLOduration=4.232309801 podStartE2EDuration="22.871360272s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.579198634 +0000 UTC m=+654.425706311" lastFinishedPulling="2026-02-23 13:25:51.218249105 +0000 UTC m=+673.064756782" observedRunningTime="2026-02-23 13:25:52.859896623 +0000 UTC m=+674.706404300" watchObservedRunningTime="2026-02-23 13:25:52.871360272 +0000 UTC m=+674.717867939" Feb 23 13:25:52.891362 master-0 kubenswrapper[26474]: I0223 13:25:52.887531 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84" podStartSLOduration=4.236896891 podStartE2EDuration="22.887509242s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.568878894 +0000 UTC m=+654.415386571" lastFinishedPulling="2026-02-23 13:25:51.219491245 
+0000 UTC m=+673.065998922" observedRunningTime="2026-02-23 13:25:52.880697047 +0000 UTC m=+674.727204744" watchObservedRunningTime="2026-02-23 13:25:52.887509242 +0000 UTC m=+674.734016919" Feb 23 13:25:52.916362 master-0 kubenswrapper[26474]: I0223 13:25:52.916021 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff" podStartSLOduration=4.646167363 podStartE2EDuration="22.916003062s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.985669348 +0000 UTC m=+654.832177025" lastFinishedPulling="2026-02-23 13:25:51.255505027 +0000 UTC m=+673.102012724" observedRunningTime="2026-02-23 13:25:52.910745774 +0000 UTC m=+674.757253451" watchObservedRunningTime="2026-02-23 13:25:52.916003062 +0000 UTC m=+674.762510739" Feb 23 13:25:52.945433 master-0 kubenswrapper[26474]: I0223 13:25:52.945254 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6" podStartSLOduration=4.731427814 podStartE2EDuration="22.945237198s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:33.004503833 +0000 UTC m=+654.851011510" lastFinishedPulling="2026-02-23 13:25:51.218313217 +0000 UTC m=+673.064820894" observedRunningTime="2026-02-23 13:25:52.942108963 +0000 UTC m=+674.788616640" watchObservedRunningTime="2026-02-23 13:25:52.945237198 +0000 UTC m=+674.791744875" Feb 23 13:25:52.971270 master-0 kubenswrapper[26474]: I0223 13:25:52.971185 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6" podStartSLOduration=3.347289847 podStartE2EDuration="22.971164126s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:31.595629927 +0000 UTC m=+653.442137604" lastFinishedPulling="2026-02-23 
13:25:51.219504196 +0000 UTC m=+673.066011883" observedRunningTime="2026-02-23 13:25:52.96761293 +0000 UTC m=+674.814120617" watchObservedRunningTime="2026-02-23 13:25:52.971164126 +0000 UTC m=+674.817671803" Feb 23 13:25:52.996367 master-0 kubenswrapper[26474]: I0223 13:25:52.986998 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67" podStartSLOduration=5.032791566 podStartE2EDuration="22.986980798s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:33.225685535 +0000 UTC m=+655.072193212" lastFinishedPulling="2026-02-23 13:25:51.179874767 +0000 UTC m=+673.026382444" observedRunningTime="2026-02-23 13:25:52.98660981 +0000 UTC m=+674.833117487" watchObservedRunningTime="2026-02-23 13:25:52.986980798 +0000 UTC m=+674.833488475" Feb 23 13:25:54.566610 master-0 kubenswrapper[26474]: I0223 13:25:54.566532 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" event={"ID":"1b3fd8f4-0323-4cf3-a20c-94ffd694f226","Type":"ContainerStarted","Data":"4b0bb62b13083cd43030b2eca7c5c0aab91c082068a8d103e52b7b8a2f089ee9"} Feb 23 13:25:54.567151 master-0 kubenswrapper[26474]: I0223 13:25:54.566866 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" Feb 23 13:25:54.569044 master-0 kubenswrapper[26474]: I0223 13:25:54.568887 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" event={"ID":"bce90f86-6940-4838-8b91-09eccef0ada1","Type":"ContainerStarted","Data":"485800b02abcc12266450f46ea74dbf7c06f35460cd580cbda052276a2220e76"} Feb 23 13:25:54.569330 master-0 kubenswrapper[26474]: I0223 13:25:54.569289 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" Feb 23 13:25:54.571059 master-0 kubenswrapper[26474]: I0223 13:25:54.571007 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" event={"ID":"8f0efba8-d92e-48b3-affc-1f155052edeb","Type":"ContainerStarted","Data":"2e41764fb5e96ace180d42bdc41c16e1c52e880715074a82c16974c8f9cc6f1b"} Feb 23 13:25:54.571170 master-0 kubenswrapper[26474]: I0223 13:25:54.571142 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" Feb 23 13:25:54.573039 master-0 kubenswrapper[26474]: I0223 13:25:54.572998 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9" event={"ID":"5674fd74-f83f-4fff-8274-02567d473982","Type":"ContainerStarted","Data":"6edc655c3237b3d5c72a3c3f1289f48151ad6985c11f8f0c1baa805824a27aa7"} Feb 23 13:25:54.614745 master-0 kubenswrapper[26474]: I0223 13:25:54.610317 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn" podStartSLOduration=3.8789066610000003 podStartE2EDuration="24.610290584s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:33.2932437 +0000 UTC m=+655.139751387" lastFinishedPulling="2026-02-23 13:25:54.024627633 +0000 UTC m=+675.871135310" observedRunningTime="2026-02-23 13:25:54.606686816 +0000 UTC m=+676.453194513" watchObservedRunningTime="2026-02-23 13:25:54.610290584 +0000 UTC m=+676.456798261" Feb 23 13:25:54.653530 master-0 kubenswrapper[26474]: I0223 13:25:54.653436 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv" podStartSLOduration=6.362623121 podStartE2EDuration="24.653407416s" 
podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:32.983010544 +0000 UTC m=+654.829518221" lastFinishedPulling="2026-02-23 13:25:51.273794839 +0000 UTC m=+673.120302516" observedRunningTime="2026-02-23 13:25:54.6332829 +0000 UTC m=+676.479790587" watchObservedRunningTime="2026-02-23 13:25:54.653407416 +0000 UTC m=+676.499915093" Feb 23 13:25:54.681388 master-0 kubenswrapper[26474]: I0223 13:25:54.677051 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb" podStartSLOduration=6.4179572799999995 podStartE2EDuration="24.677021388s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:33.004178116 +0000 UTC m=+654.850685793" lastFinishedPulling="2026-02-23 13:25:51.263242214 +0000 UTC m=+673.109749901" observedRunningTime="2026-02-23 13:25:54.651570792 +0000 UTC m=+676.498078469" watchObservedRunningTime="2026-02-23 13:25:54.677021388 +0000 UTC m=+676.523529075" Feb 23 13:25:54.718842 master-0 kubenswrapper[26474]: I0223 13:25:54.718739 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4cbz9" podStartSLOduration=6.328583783 podStartE2EDuration="23.718709347s" podCreationTimestamp="2026-02-23 13:25:31 +0000 UTC" firstStartedPulling="2026-02-23 13:25:33.899260012 +0000 UTC m=+655.745767689" lastFinishedPulling="2026-02-23 13:25:51.289385576 +0000 UTC m=+673.135893253" observedRunningTime="2026-02-23 13:25:54.670426259 +0000 UTC m=+676.516933946" watchObservedRunningTime="2026-02-23 13:25:54.718709347 +0000 UTC m=+676.565217024" Feb 23 13:25:57.607180 master-0 kubenswrapper[26474]: I0223 13:25:57.607087 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" 
event={"ID":"e583a795-8ab8-4cb3-ab87-770e147b4fcd","Type":"ContainerStarted","Data":"8e0494b130f71e37dde79f07567ada2164226b23b2e15b31e9c337ea36b8a221"}
Feb 23 13:25:57.608141 master-0 kubenswrapper[26474]: I0223 13:25:57.607243 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6"
Feb 23 13:25:57.611450 master-0 kubenswrapper[26474]: I0223 13:25:57.609890 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" event={"ID":"6ad5cdc1-3784-4521-9f4f-e3f3877f8c31","Type":"ContainerStarted","Data":"a499f6c8ee9567112a13f6c09f36efc0efbba9e3a303ce56a6c2e023d0ecd704"}
Feb 23 13:25:57.611450 master-0 kubenswrapper[26474]: I0223 13:25:57.610593 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm"
Feb 23 13:25:57.848064 master-0 kubenswrapper[26474]: I0223 13:25:57.847963 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6" podStartSLOduration=23.256306406 podStartE2EDuration="27.847935827s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:51.830243132 +0000 UTC m=+673.676750809" lastFinishedPulling="2026-02-23 13:25:56.421872543 +0000 UTC m=+678.268380230" observedRunningTime="2026-02-23 13:25:57.835491525 +0000 UTC m=+679.681999232" watchObservedRunningTime="2026-02-23 13:25:57.847935827 +0000 UTC m=+679.694443514"
Feb 23 13:25:57.863976 master-0 kubenswrapper[26474]: I0223 13:25:57.863793 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm" podStartSLOduration=23.225383186 podStartE2EDuration="27.863766439s" podCreationTimestamp="2026-02-23 13:25:30 +0000 UTC" firstStartedPulling="2026-02-23 13:25:51.778676264 +0000 UTC m=+673.625183941" lastFinishedPulling="2026-02-23 13:25:56.417059517 +0000 UTC m=+678.263567194" observedRunningTime="2026-02-23 13:25:57.861840323 +0000 UTC m=+679.708348030" watchObservedRunningTime="2026-02-23 13:25:57.863766439 +0000 UTC m=+679.710274116"
Feb 23 13:26:00.709668 master-0 kubenswrapper[26474]: I0223 13:26:00.709604 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jj5xq"
Feb 23 13:26:00.744416 master-0 kubenswrapper[26474]: I0223 13:26:00.744316 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-67nk6"
Feb 23 13:26:00.811473 master-0 kubenswrapper[26474]: I0223 13:26:00.807799 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v6gfb"
Feb 23 13:26:00.985908 master-0 kubenswrapper[26474]: I0223 13:26:00.984988 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-xbnp5"
Feb 23 13:26:01.007494 master-0 kubenswrapper[26474]: I0223 13:26:01.007384 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kn7n2"
Feb 23 13:26:01.177938 master-0 kubenswrapper[26474]: I0223 13:26:01.177852 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-fphhs"
Feb 23 13:26:01.298891 master-0 kubenswrapper[26474]: I0223 13:26:01.298770 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-h6q84"
Feb 23 13:26:01.307791 master-0 kubenswrapper[26474]: I0223 13:26:01.307681 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2tm5t"
Feb 23 13:26:01.398275 master-0 kubenswrapper[26474]: I0223 13:26:01.398211 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-g5pv6"
Feb 23 13:26:01.435609 master-0 kubenswrapper[26474]: I0223 13:26:01.432379 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-z98lb"
Feb 23 13:26:01.483380 master-0 kubenswrapper[26474]: I0223 13:26:01.481535 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wwjlb"
Feb 23 13:26:01.507040 master-0 kubenswrapper[26474]: I0223 13:26:01.506983 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-vrkfv"
Feb 23 13:26:01.533493 master-0 kubenswrapper[26474]: I0223 13:26:01.532948 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-k85ff"
Feb 23 13:26:01.642422 master-0 kubenswrapper[26474]: I0223 13:26:01.642267 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-w2ql6"
Feb 23 13:26:01.685020 master-0 kubenswrapper[26474]: I0223 13:26:01.684969 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-22lb7"
Feb 23 13:26:01.698577 master-0 kubenswrapper[26474]: I0223 13:26:01.698524 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b474t"
Feb 23 13:26:01.718005 master-0 kubenswrapper[26474]: I0223 13:26:01.717948 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-rbgvn"
Feb 23 13:26:01.772098 master-0 kubenswrapper[26474]: I0223 13:26:01.772047 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nrq67"
Feb 23 13:26:01.803319 master-0 kubenswrapper[26474]: I0223 13:26:01.803258 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vpgcl"
Feb 23 13:26:03.520762 master-0 kubenswrapper[26474]: I0223 13:26:03.520624 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"
Feb 23 13:26:03.520762 master-0 kubenswrapper[26474]: I0223 13:26:03.520728 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"
Feb 23 13:26:03.524418 master-0 kubenswrapper[26474]: I0223 13:26:03.524376 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"
Feb 23 13:26:03.526639 master-0 kubenswrapper[26474]: I0223 13:26:03.526575 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/340bb764-ee68-42e8-81da-a6eb1790da92-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-75wjb\" (UID: \"340bb764-ee68-42e8-81da-a6eb1790da92\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"
Feb 23 13:26:03.540958 master-0 kubenswrapper[26474]: I0223 13:26:03.540902 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"
Feb 23 13:26:04.551428 master-0 kubenswrapper[26474]: I0223 13:26:04.548049 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"]
Feb 23 13:26:04.679082 master-0 kubenswrapper[26474]: I0223 13:26:04.679015 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" event={"ID":"340bb764-ee68-42e8-81da-a6eb1790da92","Type":"ContainerStarted","Data":"7fb912da82849fe2c5019dc1a90f7b2e60c6417cf3d7ee0200a26c5d1b5f8776"}
Feb 23 13:26:05.692150 master-0 kubenswrapper[26474]: I0223 13:26:05.692061 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" event={"ID":"340bb764-ee68-42e8-81da-a6eb1790da92","Type":"ContainerStarted","Data":"3650ebda1b9672dccc942fcde2bf78974f1fb59369ee4da7e3b15735bd70d934"}
Feb 23 13:26:05.692866 master-0 kubenswrapper[26474]: I0223 13:26:05.692278 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"
Feb 23 13:26:05.740282 master-0 kubenswrapper[26474]: I0223 13:26:05.740187 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb" podStartSLOduration=34.740163765 podStartE2EDuration="34.740163765s" podCreationTimestamp="2026-02-23 13:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:26:05.738599247 +0000 UTC m=+687.585106924" watchObservedRunningTime="2026-02-23 13:26:05.740163765 +0000 UTC m=+687.586671442"
Feb 23 13:26:06.877326 master-0 kubenswrapper[26474]: I0223 13:26:06.877242 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-q5tzm"
Feb 23 13:26:07.229004 master-0 kubenswrapper[26474]: I0223 13:26:07.228847 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9jtxq6"
Feb 23 13:26:13.548828 master-0 kubenswrapper[26474]: I0223 13:26:13.548740 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-75wjb"
Feb 23 13:26:51.914485 master-0 kubenswrapper[26474]: I0223 13:26:51.914418 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-qgvzk"]
Feb 23 13:26:51.917890 master-0 kubenswrapper[26474]: I0223 13:26:51.915946 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-qgvzk"
Feb 23 13:26:51.924370 master-0 kubenswrapper[26474]: I0223 13:26:51.923843 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 23 13:26:51.924370 master-0 kubenswrapper[26474]: I0223 13:26:51.924107 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 23 13:26:51.924370 master-0 kubenswrapper[26474]: I0223 13:26:51.924281 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 23 13:26:51.949681 master-0 kubenswrapper[26474]: I0223 13:26:51.947700 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52179181-247c-44c6-8995-b355d526ceda-config\") pod \"dnsmasq-dns-bc7f9869-qgvzk\" (UID: \"52179181-247c-44c6-8995-b355d526ceda\") " pod="openstack/dnsmasq-dns-bc7f9869-qgvzk"
Feb 23 13:26:51.949681 master-0 kubenswrapper[26474]: I0223 13:26:51.947807 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tr2\" (UniqueName: \"kubernetes.io/projected/52179181-247c-44c6-8995-b355d526ceda-kube-api-access-k9tr2\") pod \"dnsmasq-dns-bc7f9869-qgvzk\" (UID: \"52179181-247c-44c6-8995-b355d526ceda\") " pod="openstack/dnsmasq-dns-bc7f9869-qgvzk"
Feb 23 13:26:51.970015 master-0 kubenswrapper[26474]: I0223 13:26:51.969969 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-qgvzk"]
Feb 23 13:26:52.014652 master-0 kubenswrapper[26474]: I0223 13:26:52.014603 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-2ngf5"]
Feb 23 13:26:52.017741 master-0 kubenswrapper[26474]: I0223 13:26:52.017707 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.020771 master-0 kubenswrapper[26474]: I0223 13:26:52.020699 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 23 13:26:52.036412 master-0 kubenswrapper[26474]: I0223 13:26:52.036319 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-2ngf5"]
Feb 23 13:26:52.048733 master-0 kubenswrapper[26474]: I0223 13:26:52.048685 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppwhs\" (UniqueName: \"kubernetes.io/projected/e49c8de5-6755-4f26-a7d1-160c777d7565-kube-api-access-ppwhs\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.048870 master-0 kubenswrapper[26474]: I0223 13:26:52.048764 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52179181-247c-44c6-8995-b355d526ceda-config\") pod \"dnsmasq-dns-bc7f9869-qgvzk\" (UID: \"52179181-247c-44c6-8995-b355d526ceda\") " pod="openstack/dnsmasq-dns-bc7f9869-qgvzk"
Feb 23 13:26:52.048870 master-0 kubenswrapper[26474]: I0223 13:26:52.048825 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-config\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.048948 master-0 kubenswrapper[26474]: I0223 13:26:52.048887 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-dns-svc\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.048948 master-0 kubenswrapper[26474]: I0223 13:26:52.048925 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9tr2\" (UniqueName: \"kubernetes.io/projected/52179181-247c-44c6-8995-b355d526ceda-kube-api-access-k9tr2\") pod \"dnsmasq-dns-bc7f9869-qgvzk\" (UID: \"52179181-247c-44c6-8995-b355d526ceda\") " pod="openstack/dnsmasq-dns-bc7f9869-qgvzk"
Feb 23 13:26:52.049504 master-0 kubenswrapper[26474]: I0223 13:26:52.049477 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52179181-247c-44c6-8995-b355d526ceda-config\") pod \"dnsmasq-dns-bc7f9869-qgvzk\" (UID: \"52179181-247c-44c6-8995-b355d526ceda\") " pod="openstack/dnsmasq-dns-bc7f9869-qgvzk"
Feb 23 13:26:52.069049 master-0 kubenswrapper[26474]: I0223 13:26:52.069007 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9tr2\" (UniqueName: \"kubernetes.io/projected/52179181-247c-44c6-8995-b355d526ceda-kube-api-access-k9tr2\") pod \"dnsmasq-dns-bc7f9869-qgvzk\" (UID: \"52179181-247c-44c6-8995-b355d526ceda\") " pod="openstack/dnsmasq-dns-bc7f9869-qgvzk"
Feb 23 13:26:52.151095 master-0 kubenswrapper[26474]: I0223 13:26:52.151030 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppwhs\" (UniqueName: \"kubernetes.io/projected/e49c8de5-6755-4f26-a7d1-160c777d7565-kube-api-access-ppwhs\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.151303 master-0 kubenswrapper[26474]: I0223 13:26:52.151245 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-config\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.151303 master-0 kubenswrapper[26474]: I0223 13:26:52.151295 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-dns-svc\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.152400 master-0 kubenswrapper[26474]: I0223 13:26:52.152371 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-config\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.153954 master-0 kubenswrapper[26474]: I0223 13:26:52.153644 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-dns-svc\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.176729 master-0 kubenswrapper[26474]: I0223 13:26:52.176633 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppwhs\" (UniqueName: \"kubernetes.io/projected/e49c8de5-6755-4f26-a7d1-160c777d7565-kube-api-access-ppwhs\") pod \"dnsmasq-dns-7d4c486879-2ngf5\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.247406 master-0 kubenswrapper[26474]: I0223 13:26:52.247351 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-qgvzk"
Feb 23 13:26:52.356411 master-0 kubenswrapper[26474]: I0223 13:26:52.355665 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-2ngf5"
Feb 23 13:26:52.697554 master-0 kubenswrapper[26474]: I0223 13:26:52.697475 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-qgvzk"]
Feb 23 13:26:52.712243 master-0 kubenswrapper[26474]: W0223 13:26:52.712188 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52179181_247c_44c6_8995_b355d526ceda.slice/crio-7f38c4f018c12dbb789d227d200c286356fb6306756fe547e85d7ce1295682e2 WatchSource:0}: Error finding container 7f38c4f018c12dbb789d227d200c286356fb6306756fe547e85d7ce1295682e2: Status 404 returned error can't find the container with id 7f38c4f018c12dbb789d227d200c286356fb6306756fe547e85d7ce1295682e2
Feb 23 13:26:52.816703 master-0 kubenswrapper[26474]: I0223 13:26:52.816651 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-2ngf5"]
Feb 23 13:26:52.820920 master-0 kubenswrapper[26474]: W0223 13:26:52.820864 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode49c8de5_6755_4f26_a7d1_160c777d7565.slice/crio-a91d3be97c90f3626a3da4fb8d5d4a2c657194c4b06525c9086e0cbf13a4e879 WatchSource:0}: Error finding container a91d3be97c90f3626a3da4fb8d5d4a2c657194c4b06525c9086e0cbf13a4e879: Status 404 returned error can't find the container with id a91d3be97c90f3626a3da4fb8d5d4a2c657194c4b06525c9086e0cbf13a4e879
Feb 23 13:26:53.127934 master-0 kubenswrapper[26474]: I0223 13:26:53.127862 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-2ngf5" event={"ID":"e49c8de5-6755-4f26-a7d1-160c777d7565","Type":"ContainerStarted","Data":"a91d3be97c90f3626a3da4fb8d5d4a2c657194c4b06525c9086e0cbf13a4e879"}
Feb 23 13:26:53.129225 master-0 kubenswrapper[26474]: I0223 13:26:53.129189 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-qgvzk" event={"ID":"52179181-247c-44c6-8995-b355d526ceda","Type":"ContainerStarted","Data":"7f38c4f018c12dbb789d227d200c286356fb6306756fe547e85d7ce1295682e2"}
Feb 23 13:26:54.223380 master-0 kubenswrapper[26474]: I0223 13:26:54.223123 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-2ngf5"]
Feb 23 13:26:54.259230 master-0 kubenswrapper[26474]: I0223 13:26:54.256789 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-lcbqd"]
Feb 23 13:26:54.265169 master-0 kubenswrapper[26474]: I0223 13:26:54.262505 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.265666 master-0 kubenswrapper[26474]: I0223 13:26:54.265616 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-lcbqd"]
Feb 23 13:26:54.405348 master-0 kubenswrapper[26474]: I0223 13:26:54.405280 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dlp\" (UniqueName: \"kubernetes.io/projected/26b91121-7f3b-4ab5-83f5-ee336da9e897-kube-api-access-z4dlp\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.405582 master-0 kubenswrapper[26474]: I0223 13:26:54.405508 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-dns-svc\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.405582 master-0 kubenswrapper[26474]: I0223 13:26:54.405543 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-config\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.508780 master-0 kubenswrapper[26474]: I0223 13:26:54.508677 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-dns-svc\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.508780 master-0 kubenswrapper[26474]: I0223 13:26:54.508744 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-config\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.508997 master-0 kubenswrapper[26474]: I0223 13:26:54.508846 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dlp\" (UniqueName: \"kubernetes.io/projected/26b91121-7f3b-4ab5-83f5-ee336da9e897-kube-api-access-z4dlp\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.511015 master-0 kubenswrapper[26474]: I0223 13:26:54.510959 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-config\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.525766 master-0 kubenswrapper[26474]: I0223 13:26:54.511715 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-dns-svc\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.534917 master-0 kubenswrapper[26474]: I0223 13:26:54.534866 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dlp\" (UniqueName: \"kubernetes.io/projected/26b91121-7f3b-4ab5-83f5-ee336da9e897-kube-api-access-z4dlp\") pod \"dnsmasq-dns-6974cff98c-lcbqd\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.579978 master-0 kubenswrapper[26474]: I0223 13:26:54.579229 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-qgvzk"]
Feb 23 13:26:54.622559 master-0 kubenswrapper[26474]: I0223 13:26:54.620147 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd"
Feb 23 13:26:54.648738 master-0 kubenswrapper[26474]: I0223 13:26:54.647779 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-cd2wc"]
Feb 23 13:26:54.649902 master-0 kubenswrapper[26474]: I0223 13:26:54.649861 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.684945 master-0 kubenswrapper[26474]: I0223 13:26:54.684889 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-cd2wc"]
Feb 23 13:26:54.830585 master-0 kubenswrapper[26474]: I0223 13:26:54.830519 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-config\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.830803 master-0 kubenswrapper[26474]: I0223 13:26:54.830670 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.830844 master-0 kubenswrapper[26474]: I0223 13:26:54.830809 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mqh\" (UniqueName: \"kubernetes.io/projected/3078ad03-a115-4907-840e-a5c5057bed71-kube-api-access-62mqh\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.935252 master-0 kubenswrapper[26474]: I0223 13:26:54.934010 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.935252 master-0 kubenswrapper[26474]: I0223 13:26:54.934169 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62mqh\" (UniqueName: \"kubernetes.io/projected/3078ad03-a115-4907-840e-a5c5057bed71-kube-api-access-62mqh\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.935252 master-0 kubenswrapper[26474]: I0223 13:26:54.934241 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-config\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.935252 master-0 kubenswrapper[26474]: I0223 13:26:54.935040 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.937711 master-0 kubenswrapper[26474]: I0223 13:26:54.935381 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-config\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.959794 master-0 kubenswrapper[26474]: I0223 13:26:54.959615 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62mqh\" (UniqueName: \"kubernetes.io/projected/3078ad03-a115-4907-840e-a5c5057bed71-kube-api-access-62mqh\") pod \"dnsmasq-dns-7c45d57b9c-cd2wc\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") " pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:54.995519 master-0 kubenswrapper[26474]: I0223 13:26:54.994488 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:26:55.243442 master-0 kubenswrapper[26474]: I0223 13:26:55.243365 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-lcbqd"]
Feb 23 13:26:55.258244 master-0 kubenswrapper[26474]: W0223 13:26:55.258194 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26b91121_7f3b_4ab5_83f5_ee336da9e897.slice/crio-0663a70e704e94bf590b0d74b3d41a36240a407d86ecfe911b2160aa9a327086 WatchSource:0}: Error finding container 0663a70e704e94bf590b0d74b3d41a36240a407d86ecfe911b2160aa9a327086: Status 404 returned error can't find the container with id 0663a70e704e94bf590b0d74b3d41a36240a407d86ecfe911b2160aa9a327086
Feb 23 13:26:55.504934 master-0 kubenswrapper[26474]: I0223 13:26:55.504794 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-cd2wc"]
Feb 23 13:26:55.517600 master-0 kubenswrapper[26474]: W0223 13:26:55.517530 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3078ad03_a115_4907_840e_a5c5057bed71.slice/crio-70e9a550a818928642f7b9790faed54147a53d611f4c6fa571ac954e25b58e3e WatchSource:0}: Error finding container 70e9a550a818928642f7b9790faed54147a53d611f4c6fa571ac954e25b58e3e: Status 404 returned error can't find the container with id 70e9a550a818928642f7b9790faed54147a53d611f4c6fa571ac954e25b58e3e
Feb 23 13:26:56.201554 master-0 kubenswrapper[26474]: I0223 13:26:56.201472 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" event={"ID":"26b91121-7f3b-4ab5-83f5-ee336da9e897","Type":"ContainerStarted","Data":"0663a70e704e94bf590b0d74b3d41a36240a407d86ecfe911b2160aa9a327086"}
Feb 23 13:26:56.206798 master-0 kubenswrapper[26474]: I0223 13:26:56.203874 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" event={"ID":"3078ad03-a115-4907-840e-a5c5057bed71","Type":"ContainerStarted","Data":"70e9a550a818928642f7b9790faed54147a53d611f4c6fa571ac954e25b58e3e"}
Feb 23 13:26:58.475148 master-0 kubenswrapper[26474]: I0223 13:26:58.475070 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 23 13:26:58.479136 master-0 kubenswrapper[26474]: I0223 13:26:58.479017 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.485782 master-0 kubenswrapper[26474]: I0223 13:26:58.485713 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 23 13:26:58.486061 master-0 kubenswrapper[26474]: I0223 13:26:58.486038 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 23 13:26:58.486204 master-0 kubenswrapper[26474]: I0223 13:26:58.486170 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 23 13:26:58.488059 master-0 kubenswrapper[26474]: I0223 13:26:58.487875 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 23 13:26:58.488059 master-0 kubenswrapper[26474]: I0223 13:26:58.488042 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 23 13:26:58.488705 master-0 kubenswrapper[26474]: I0223 13:26:58.488250 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 23 13:26:58.497529 master-0 kubenswrapper[26474]: I0223 13:26:58.497027 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 23 13:26:58.533490 master-0 kubenswrapper[26474]: I0223 13:26:58.533420 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533605 master-0 kubenswrapper[26474]: I0223 13:26:58.533506 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/08e48693-a2aa-426e-9718-5484046f9a4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533605 master-0 kubenswrapper[26474]: I0223 13:26:58.533533 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctrqt\" (UniqueName: \"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-kube-api-access-ctrqt\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533605 master-0 kubenswrapper[26474]: I0223 13:26:58.533565 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533605 master-0 kubenswrapper[26474]: I0223 13:26:58.533588 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533734 master-0 kubenswrapper[26474]: I0223 13:26:58.533626 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533734 master-0 kubenswrapper[26474]: I0223 13:26:58.533654 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b80e3101-82ca-481d-b34d-1d93cd525ec7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2ef95af2-2b59-4aad-ae92-69cdffdc655f\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533734 master-0 kubenswrapper[26474]: I0223 13:26:58.533675 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/08e48693-a2aa-426e-9718-5484046f9a4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533734 master-0 kubenswrapper[26474]: I0223 13:26:58.533705 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533734 master-0 kubenswrapper[26474]: I0223 13:26:58.533727 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-config-data\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:26:58.533901 master-0 kubenswrapper[26474]: I0223 13:26:58.533752 26474 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.634770 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.634843 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b80e3101-82ca-481d-b34d-1d93cd525ec7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2ef95af2-2b59-4aad-ae92-69cdffdc655f\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.634873 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/08e48693-a2aa-426e-9718-5484046f9a4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.634916 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.634941 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-config-data\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.634958 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.635000 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.635030 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/08e48693-a2aa-426e-9718-5484046f9a4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.635053 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctrqt\" (UniqueName: \"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-kube-api-access-ctrqt\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.635078 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.635390 master-0 kubenswrapper[26474]: I0223 13:26:58.635103 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.636797 master-0 kubenswrapper[26474]: I0223 13:26:58.636771 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.637330 master-0 kubenswrapper[26474]: I0223 13:26:58.637303 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.662219 master-0 kubenswrapper[26474]: I0223 13:26:58.647446 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.662219 master-0 kubenswrapper[26474]: I0223 13:26:58.648059 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.662219 master-0 kubenswrapper[26474]: I0223 13:26:58.648668 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/08e48693-a2aa-426e-9718-5484046f9a4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.662219 master-0 kubenswrapper[26474]: I0223 13:26:58.651093 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/08e48693-a2aa-426e-9718-5484046f9a4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.662219 master-0 kubenswrapper[26474]: I0223 13:26:58.661035 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/08e48693-a2aa-426e-9718-5484046f9a4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.676359 master-0 kubenswrapper[26474]: I0223 13:26:58.671732 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 23 13:26:58.676359 master-0 kubenswrapper[26474]: I0223 13:26:58.671780 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b80e3101-82ca-481d-b34d-1d93cd525ec7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2ef95af2-2b59-4aad-ae92-69cdffdc655f\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/895fdf212a3eadb0b77367586c23c31bb9721ffabae38dca43244cde9d10cc09/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.676359 master-0 kubenswrapper[26474]: I0223 13:26:58.674179 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.711363 master-0 kubenswrapper[26474]: I0223 13:26:58.705448 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctrqt\" (UniqueName: \"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-kube-api-access-ctrqt\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.731441 master-0 kubenswrapper[26474]: I0223 13:26:58.713325 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/08e48693-a2aa-426e-9718-5484046f9a4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0" Feb 23 13:26:58.846518 master-0 kubenswrapper[26474]: I0223 13:26:58.846440 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 23 13:26:58.849058 master-0 kubenswrapper[26474]: I0223 13:26:58.848007 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 23 13:26:58.866570 master-0 kubenswrapper[26474]: I0223 13:26:58.864934 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 23 13:26:58.866570 master-0 kubenswrapper[26474]: I0223 13:26:58.865510 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 23 13:26:58.868959 master-0 kubenswrapper[26474]: I0223 13:26:58.868873 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 13:26:58.886365 master-0 kubenswrapper[26474]: I0223 13:26:58.886235 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 23 13:26:58.955006 master-0 kubenswrapper[26474]: I0223 13:26:58.954903 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc6581a5-b49c-4ad1-abd5-cfd583858288-kolla-config\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:58.955236 master-0 kubenswrapper[26474]: I0223 13:26:58.955037 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc6581a5-b49c-4ad1-abd5-cfd583858288-config-data\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:58.955236 master-0 kubenswrapper[26474]: I0223 13:26:58.955118 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6581a5-b49c-4ad1-abd5-cfd583858288-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:58.955315 master-0 kubenswrapper[26474]: I0223 13:26:58.955266 26474 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b48ht\" (UniqueName: \"kubernetes.io/projected/bc6581a5-b49c-4ad1-abd5-cfd583858288-kube-api-access-b48ht\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:58.955565 master-0 kubenswrapper[26474]: I0223 13:26:58.955533 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6581a5-b49c-4ad1-abd5-cfd583858288-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.056882 master-0 kubenswrapper[26474]: I0223 13:26:59.056803 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc6581a5-b49c-4ad1-abd5-cfd583858288-config-data\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.057104 master-0 kubenswrapper[26474]: I0223 13:26:59.057037 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6581a5-b49c-4ad1-abd5-cfd583858288-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.057443 master-0 kubenswrapper[26474]: I0223 13:26:59.057407 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b48ht\" (UniqueName: \"kubernetes.io/projected/bc6581a5-b49c-4ad1-abd5-cfd583858288-kube-api-access-b48ht\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.057541 master-0 kubenswrapper[26474]: I0223 13:26:59.057516 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6581a5-b49c-4ad1-abd5-cfd583858288-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.057581 master-0 kubenswrapper[26474]: I0223 13:26:59.057547 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc6581a5-b49c-4ad1-abd5-cfd583858288-kolla-config\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.057955 master-0 kubenswrapper[26474]: I0223 13:26:59.057864 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc6581a5-b49c-4ad1-abd5-cfd583858288-config-data\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.058732 master-0 kubenswrapper[26474]: I0223 13:26:59.058681 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc6581a5-b49c-4ad1-abd5-cfd583858288-kolla-config\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.064174 master-0 kubenswrapper[26474]: I0223 13:26:59.064101 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6581a5-b49c-4ad1-abd5-cfd583858288-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.067490 master-0 kubenswrapper[26474]: I0223 13:26:59.067442 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6581a5-b49c-4ad1-abd5-cfd583858288-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") 
" pod="openstack/memcached-0" Feb 23 13:26:59.076688 master-0 kubenswrapper[26474]: I0223 13:26:59.076635 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b48ht\" (UniqueName: \"kubernetes.io/projected/bc6581a5-b49c-4ad1-abd5-cfd583858288-kube-api-access-b48ht\") pod \"memcached-0\" (UID: \"bc6581a5-b49c-4ad1-abd5-cfd583858288\") " pod="openstack/memcached-0" Feb 23 13:26:59.186032 master-0 kubenswrapper[26474]: I0223 13:26:59.185530 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 23 13:26:59.767975 master-0 kubenswrapper[26474]: I0223 13:26:59.764300 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:26:59.768818 master-0 kubenswrapper[26474]: I0223 13:26:59.768665 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.771297 master-0 kubenswrapper[26474]: I0223 13:26:59.770846 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 13:26:59.771297 master-0 kubenswrapper[26474]: I0223 13:26:59.770834 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 13:26:59.771297 master-0 kubenswrapper[26474]: I0223 13:26:59.771295 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 13:26:59.771747 master-0 kubenswrapper[26474]: I0223 13:26:59.771726 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 13:26:59.772273 master-0 kubenswrapper[26474]: I0223 13:26:59.772124 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 13:26:59.778477 master-0 kubenswrapper[26474]: I0223 13:26:59.778295 26474 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 13:26:59.800883 master-0 kubenswrapper[26474]: I0223 13:26:59.800811 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:26:59.873150 master-0 kubenswrapper[26474]: I0223 13:26:59.873066 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873202 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873274 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9502e2b0-2a39-47b6-b482-f13048ccdf41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873331 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873398 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873422 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873483 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873525 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873550 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c41ace6c-11d2-4266-8589-b50b401833ae\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b9502f9-b590-48ba-8a97-6dce9707ed37\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 
13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873595 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5d2m\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-kube-api-access-d5d2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.874226 master-0 kubenswrapper[26474]: I0223 13:26:59.873619 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9502e2b0-2a39-47b6-b482-f13048ccdf41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.975626 master-0 kubenswrapper[26474]: I0223 13:26:59.975478 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9502e2b0-2a39-47b6-b482-f13048ccdf41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.975626 master-0 kubenswrapper[26474]: I0223 13:26:59.975589 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.975626 master-0 kubenswrapper[26474]: I0223 13:26:59.975617 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" 
Feb 23 13:26:59.975893 master-0 kubenswrapper[26474]: I0223 13:26:59.975640 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.975893 master-0 kubenswrapper[26474]: I0223 13:26:59.975685 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.975893 master-0 kubenswrapper[26474]: I0223 13:26:59.975713 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.975893 master-0 kubenswrapper[26474]: I0223 13:26:59.975732 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c41ace6c-11d2-4266-8589-b50b401833ae\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b9502f9-b590-48ba-8a97-6dce9707ed37\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:59.975893 master-0 kubenswrapper[26474]: I0223 13:26:59.975766 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5d2m\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-kube-api-access-d5d2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " 
pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.975893 master-0 kubenswrapper[26474]: I0223 13:26:59.975784 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9502e2b0-2a39-47b6-b482-f13048ccdf41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.975893 master-0 kubenswrapper[26474]: I0223 13:26:59.975819 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.975893 master-0 kubenswrapper[26474]: I0223 13:26:59.975867 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.988390 master-0 kubenswrapper[26474]: I0223 13:26:59.977581 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.988390 master-0 kubenswrapper[26474]: I0223 13:26:59.978180 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.988390 master-0 kubenswrapper[26474]: I0223 13:26:59.981896 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.988390 master-0 kubenswrapper[26474]: I0223 13:26:59.982254 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.988390 master-0 kubenswrapper[26474]: I0223 13:26:59.987234 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9502e2b0-2a39-47b6-b482-f13048ccdf41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.988390 master-0 kubenswrapper[26474]: I0223 13:26:59.988081 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.989648 master-0 kubenswrapper[26474]: I0223 13:26:59.988975 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 13:26:59.989648 master-0 kubenswrapper[26474]: I0223 13:26:59.989000 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c41ace6c-11d2-4266-8589-b50b401833ae\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b9502f9-b590-48ba-8a97-6dce9707ed37\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2a64eb82f8f0a6e1a52c7fce9aa11712b476696be83fbfacb823e7c8039463a8/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.989779 master-0 kubenswrapper[26474]: I0223 13:26:59.989733 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9502e2b0-2a39-47b6-b482-f13048ccdf41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:26:59.995400 master-0 kubenswrapper[26474]: I0223 13:26:59.995315 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:27:00.010407 master-0 kubenswrapper[26474]: I0223 13:27:00.005600 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9502e2b0-2a39-47b6-b482-f13048ccdf41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:27:00.027301 master-0 kubenswrapper[26474]: I0223 13:27:00.027256 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5d2m\" (UniqueName: \"kubernetes.io/projected/9502e2b0-2a39-47b6-b482-f13048ccdf41-kube-api-access-d5d2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:27:00.155760 master-0 kubenswrapper[26474]: I0223 13:27:00.155606 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 23 13:27:00.157896 master-0 kubenswrapper[26474]: I0223 13:27:00.157870 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 23 13:27:00.165680 master-0 kubenswrapper[26474]: I0223 13:27:00.165639 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 23 13:27:00.174651 master-0 kubenswrapper[26474]: I0223 13:27:00.174547 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 23 13:27:00.178547 master-0 kubenswrapper[26474]: I0223 13:27:00.177777 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 23 13:27:00.191867 master-0 kubenswrapper[26474]: I0223 13:27:00.191786 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 23 13:27:00.292706 master-0 kubenswrapper[26474]: I0223 13:27:00.292647 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7466826b-28a5-465e-9f60-484489173aa4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.292919 master-0 kubenswrapper[26474]: I0223 13:27:00.292730 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-config-data-default\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.292919 master-0 kubenswrapper[26474]: I0223 13:27:00.292765 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.292919 master-0 kubenswrapper[26474]: I0223 13:27:00.292792 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-kolla-config\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.293013 master-0 kubenswrapper[26474]: I0223 13:27:00.292926 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl67p\" (UniqueName: \"kubernetes.io/projected/7466826b-28a5-465e-9f60-484489173aa4-kube-api-access-vl67p\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.293013 master-0 kubenswrapper[26474]: I0223 13:27:00.292988 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7466826b-28a5-465e-9f60-484489173aa4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.293080 master-0 kubenswrapper[26474]: I0223 13:27:00.293031 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7466826b-28a5-465e-9f60-484489173aa4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.293080 master-0 kubenswrapper[26474]: I0223 13:27:00.293059 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-80183abd-4628-41ab-8080-87f1b17c72d1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b12a0af-1fc8-4dcf-a61f-bb7c6dc5b53b\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.357727 master-0 kubenswrapper[26474]: I0223 13:27:00.356231 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b80e3101-82ca-481d-b34d-1d93cd525ec7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2ef95af2-2b59-4aad-ae92-69cdffdc655f\") pod \"rabbitmq-server-0\" (UID: \"08e48693-a2aa-426e-9718-5484046f9a4e\") " pod="openstack/rabbitmq-server-0"
Feb 23 13:27:00.396250 master-0 kubenswrapper[26474]: I0223 13:27:00.396130 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-config-data-default\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.396657 master-0 kubenswrapper[26474]: I0223 13:27:00.396585 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.396821 master-0 kubenswrapper[26474]: I0223 13:27:00.396756 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-kolla-config\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.396983 master-0 kubenswrapper[26474]: I0223 13:27:00.396795 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl67p\" (UniqueName: \"kubernetes.io/projected/7466826b-28a5-465e-9f60-484489173aa4-kube-api-access-vl67p\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.397231 master-0 kubenswrapper[26474]: I0223 13:27:00.397156 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7466826b-28a5-465e-9f60-484489173aa4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.398266 master-0 kubenswrapper[26474]: I0223 13:27:00.398195 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7466826b-28a5-465e-9f60-484489173aa4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.398559 master-0 kubenswrapper[26474]: I0223 13:27:00.398482 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-80183abd-4628-41ab-8080-87f1b17c72d1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b12a0af-1fc8-4dcf-a61f-bb7c6dc5b53b\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.398663 master-0 kubenswrapper[26474]: I0223 13:27:00.398543 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-config-data-default\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.398891 master-0 kubenswrapper[26474]: I0223 13:27:00.398843 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.399785 master-0 kubenswrapper[26474]: I0223 13:27:00.399755 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7466826b-28a5-465e-9f60-484489173aa4-kolla-config\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.400252 master-0 kubenswrapper[26474]: I0223 13:27:00.400224 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7466826b-28a5-465e-9f60-484489173aa4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.402740 master-0 kubenswrapper[26474]: I0223 13:27:00.401924 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7466826b-28a5-465e-9f60-484489173aa4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.404118 master-0 kubenswrapper[26474]: I0223 13:27:00.403477 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7466826b-28a5-465e-9f60-484489173aa4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.404118 master-0 kubenswrapper[26474]: I0223 13:27:00.403548 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 13:27:00.404118 master-0 kubenswrapper[26474]: I0223 13:27:00.403574 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-80183abd-4628-41ab-8080-87f1b17c72d1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b12a0af-1fc8-4dcf-a61f-bb7c6dc5b53b\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a0df93ab3b5ad4d40b00cd34e84879303b090624cf7085702c70d6d2f49b36df/globalmount\"" pod="openstack/openstack-galera-0"
Feb 23 13:27:00.408798 master-0 kubenswrapper[26474]: I0223 13:27:00.408741 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7466826b-28a5-465e-9f60-484489173aa4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.421597 master-0 kubenswrapper[26474]: I0223 13:27:00.416752 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl67p\" (UniqueName: \"kubernetes.io/projected/7466826b-28a5-465e-9f60-484489173aa4-kube-api-access-vl67p\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:00.605585 master-0 kubenswrapper[26474]: I0223 13:27:00.604817 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 23 13:27:01.774103 master-0 kubenswrapper[26474]: I0223 13:27:01.774049 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c41ace6c-11d2-4266-8589-b50b401833ae\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b9502f9-b590-48ba-8a97-6dce9707ed37\") pod \"rabbitmq-cell1-server-0\" (UID: \"9502e2b0-2a39-47b6-b482-f13048ccdf41\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:27:01.904715 master-0 kubenswrapper[26474]: I0223 13:27:01.898661 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:27:01.948007 master-0 kubenswrapper[26474]: I0223 13:27:01.947849 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 23 13:27:01.950479 master-0 kubenswrapper[26474]: I0223 13:27:01.950372 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:01.957503 master-0 kubenswrapper[26474]: I0223 13:27:01.957432 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 23 13:27:01.958850 master-0 kubenswrapper[26474]: I0223 13:27:01.958156 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 23 13:27:01.959047 master-0 kubenswrapper[26474]: I0223 13:27:01.959021 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 23 13:27:01.961150 master-0 kubenswrapper[26474]: I0223 13:27:01.961083 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 23 13:27:02.048848 master-0 kubenswrapper[26474]: I0223 13:27:02.048727 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2xwn\" (UniqueName: \"kubernetes.io/projected/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-kube-api-access-j2xwn\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.048848 master-0 kubenswrapper[26474]: I0223 13:27:02.048848 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.049471 master-0 kubenswrapper[26474]: I0223 13:27:02.048902 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c6412925-eea5-49f5-9580-36ab1efebf78\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b6496994-2b13-4382-b35e-699a18641950\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.049471 master-0 kubenswrapper[26474]: I0223 13:27:02.048991 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.049471 master-0 kubenswrapper[26474]: I0223 13:27:02.049176 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.049767 master-0 kubenswrapper[26474]: I0223 13:27:02.049543 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.049767 master-0 kubenswrapper[26474]: I0223 13:27:02.049609 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.049767 master-0 kubenswrapper[26474]: I0223 13:27:02.049761 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.152839 master-0 kubenswrapper[26474]: I0223 13:27:02.152734 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.152839 master-0 kubenswrapper[26474]: I0223 13:27:02.152833 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.153679 master-0 kubenswrapper[26474]: I0223 13:27:02.152877 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.153679 master-0 kubenswrapper[26474]: I0223 13:27:02.152996 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2xwn\" (UniqueName: \"kubernetes.io/projected/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-kube-api-access-j2xwn\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.153679 master-0 kubenswrapper[26474]: I0223 13:27:02.153027 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.153679 master-0 kubenswrapper[26474]: I0223 13:27:02.153074 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c6412925-eea5-49f5-9580-36ab1efebf78\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b6496994-2b13-4382-b35e-699a18641950\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.153679 master-0 kubenswrapper[26474]: I0223 13:27:02.153116 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.153679 master-0 kubenswrapper[26474]: I0223 13:27:02.153211 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.154085 master-0 kubenswrapper[26474]: I0223 13:27:02.154026 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.156911 master-0 kubenswrapper[26474]: I0223 13:27:02.156864 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.157087 master-0 kubenswrapper[26474]: I0223 13:27:02.157035 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.157885 master-0 kubenswrapper[26474]: I0223 13:27:02.157817 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.158417 master-0 kubenswrapper[26474]: I0223 13:27:02.158392 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 13:27:02.158510 master-0 kubenswrapper[26474]: I0223 13:27:02.158429 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c6412925-eea5-49f5-9580-36ab1efebf78\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b6496994-2b13-4382-b35e-699a18641950\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/508a7919511b5386eb814e1b281cf44274c7a8f2eb8711bc2ee01a638c80ce66/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.168951 master-0 kubenswrapper[26474]: I0223 13:27:02.159621 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.171089 master-0 kubenswrapper[26474]: I0223 13:27:02.171019 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.172516 master-0 kubenswrapper[26474]: I0223 13:27:02.172462 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2xwn\" (UniqueName: \"kubernetes.io/projected/60d0ffdf-e5a8-457e-ad9d-e23dd25679d1-kube-api-access-j2xwn\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:02.802214 master-0 kubenswrapper[26474]: I0223 13:27:02.802138 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-80183abd-4628-41ab-8080-87f1b17c72d1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b12a0af-1fc8-4dcf-a61f-bb7c6dc5b53b\") pod \"openstack-galera-0\" (UID: \"7466826b-28a5-465e-9f60-484489173aa4\") " pod="openstack/openstack-galera-0"
Feb 23 13:27:02.886009 master-0 kubenswrapper[26474]: I0223 13:27:02.885952 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 23 13:27:03.856042 master-0 kubenswrapper[26474]: I0223 13:27:03.855947 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c6412925-eea5-49f5-9580-36ab1efebf78\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b6496994-2b13-4382-b35e-699a18641950\") pod \"openstack-cell1-galera-0\" (UID: \"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1\") " pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:04.374063 master-0 kubenswrapper[26474]: I0223 13:27:04.373977 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 23 13:27:05.952150 master-0 kubenswrapper[26474]: I0223 13:27:05.952053 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qzxjb"]
Feb 23 13:27:05.954162 master-0 kubenswrapper[26474]: I0223 13:27:05.954119 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:05.964132 master-0 kubenswrapper[26474]: I0223 13:27:05.964078 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 23 13:27:05.964368 master-0 kubenswrapper[26474]: I0223 13:27:05.964352 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 23 13:27:05.985524 master-0 kubenswrapper[26474]: I0223 13:27:05.970049 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hlg7s"]
Feb 23 13:27:05.985524 master-0 kubenswrapper[26474]: I0223 13:27:05.972690 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:05.986687 master-0 kubenswrapper[26474]: I0223 13:27:05.985715 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzxjb"]
Feb 23 13:27:06.000702 master-0 kubenswrapper[26474]: I0223 13:27:06.000607 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hlg7s"]
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048702 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lkvz\" (UniqueName: \"kubernetes.io/projected/106bae0e-78dc-455a-bca3-35057d5a145a-kube-api-access-9lkvz\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048766 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-run\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048807 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106bae0e-78dc-455a-bca3-35057d5a145a-combined-ca-bundle\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048828 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b309cab5-6d67-43b3-9a21-323910978e12-scripts\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048850 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-run\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048892 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-log\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048915 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-log-ovn\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048935 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trvg9\" (UniqueName: \"kubernetes.io/projected/b309cab5-6d67-43b3-9a21-323910978e12-kube-api-access-trvg9\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:06.048968 master-0 kubenswrapper[26474]: I0223 13:27:06.048953 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/106bae0e-78dc-455a-bca3-35057d5a145a-scripts\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:06.049448 master-0 kubenswrapper[26474]: I0223 13:27:06.048990 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/106bae0e-78dc-455a-bca3-35057d5a145a-ovn-controller-tls-certs\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:06.049448 master-0 kubenswrapper[26474]: I0223 13:27:06.049015 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-run-ovn\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:06.049448 master-0 kubenswrapper[26474]: I0223 13:27:06.049031 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-lib\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:06.049448 master-0 kubenswrapper[26474]: I0223 13:27:06.049085 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-etc-ovs\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:06.151351 master-0 kubenswrapper[26474]: I0223 13:27:06.151276 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-etc-ovs\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:06.151351 master-0 kubenswrapper[26474]: I0223 13:27:06.151334 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lkvz\" (UniqueName: \"kubernetes.io/projected/106bae0e-78dc-455a-bca3-35057d5a145a-kube-api-access-9lkvz\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 23 13:27:06.151590 master-0 kubenswrapper[26474]: I0223 13:27:06.151375 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-run\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s"
Feb 23 13:27:06.151590 master-0 kubenswrapper[26474]: I0223 13:27:06.151413 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106bae0e-78dc-455a-bca3-35057d5a145a-combined-ca-bundle\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb"
Feb 
23 13:27:06.151590 master-0 kubenswrapper[26474]: I0223 13:27:06.151432 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b309cab5-6d67-43b3-9a21-323910978e12-scripts\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.151968 master-0 kubenswrapper[26474]: I0223 13:27:06.151917 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-etc-ovs\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.152032 master-0 kubenswrapper[26474]: I0223 13:27:06.152009 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-run\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.152195 master-0 kubenswrapper[26474]: I0223 13:27:06.152171 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-log\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.152234 master-0 kubenswrapper[26474]: I0223 13:27:06.152226 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-log-ovn\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.152284 master-0 kubenswrapper[26474]: I0223 13:27:06.152247 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-trvg9\" (UniqueName: \"kubernetes.io/projected/b309cab5-6d67-43b3-9a21-323910978e12-kube-api-access-trvg9\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.152318 master-0 kubenswrapper[26474]: I0223 13:27:06.152308 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/106bae0e-78dc-455a-bca3-35057d5a145a-scripts\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.152445 master-0 kubenswrapper[26474]: I0223 13:27:06.152417 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/106bae0e-78dc-455a-bca3-35057d5a145a-ovn-controller-tls-certs\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.152489 master-0 kubenswrapper[26474]: I0223 13:27:06.152476 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-run-ovn\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.152534 master-0 kubenswrapper[26474]: I0223 13:27:06.152518 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-lib\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.152667 master-0 kubenswrapper[26474]: I0223 13:27:06.152590 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-run\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.152853 master-0 kubenswrapper[26474]: I0223 13:27:06.152825 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-log\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.152906 master-0 kubenswrapper[26474]: I0223 13:27:06.152877 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-lib\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.152971 master-0 kubenswrapper[26474]: I0223 13:27:06.152945 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-log-ovn\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.153562 master-0 kubenswrapper[26474]: I0223 13:27:06.153489 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b309cab5-6d67-43b3-9a21-323910978e12-var-run\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.154484 master-0 kubenswrapper[26474]: I0223 13:27:06.154459 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b309cab5-6d67-43b3-9a21-323910978e12-scripts\") pod \"ovn-controller-ovs-hlg7s\" (UID: 
\"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.154604 master-0 kubenswrapper[26474]: I0223 13:27:06.154557 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/106bae0e-78dc-455a-bca3-35057d5a145a-var-run-ovn\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.155767 master-0 kubenswrapper[26474]: I0223 13:27:06.155722 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/106bae0e-78dc-455a-bca3-35057d5a145a-scripts\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.157606 master-0 kubenswrapper[26474]: I0223 13:27:06.157567 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/106bae0e-78dc-455a-bca3-35057d5a145a-ovn-controller-tls-certs\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.157902 master-0 kubenswrapper[26474]: I0223 13:27:06.157860 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106bae0e-78dc-455a-bca3-35057d5a145a-combined-ca-bundle\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.170758 master-0 kubenswrapper[26474]: I0223 13:27:06.170717 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lkvz\" (UniqueName: \"kubernetes.io/projected/106bae0e-78dc-455a-bca3-35057d5a145a-kube-api-access-9lkvz\") pod \"ovn-controller-qzxjb\" (UID: \"106bae0e-78dc-455a-bca3-35057d5a145a\") " pod="openstack/ovn-controller-qzxjb" Feb 
23 13:27:06.179538 master-0 kubenswrapper[26474]: I0223 13:27:06.179496 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trvg9\" (UniqueName: \"kubernetes.io/projected/b309cab5-6d67-43b3-9a21-323910978e12-kube-api-access-trvg9\") pod \"ovn-controller-ovs-hlg7s\" (UID: \"b309cab5-6d67-43b3-9a21-323910978e12\") " pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:06.327712 master-0 kubenswrapper[26474]: I0223 13:27:06.327623 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:06.337958 master-0 kubenswrapper[26474]: I0223 13:27:06.337915 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:07.217651 master-0 kubenswrapper[26474]: I0223 13:27:07.217572 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 13:27:07.226527 master-0 kubenswrapper[26474]: I0223 13:27:07.224199 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.228446 master-0 kubenswrapper[26474]: I0223 13:27:07.227102 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 23 13:27:07.228446 master-0 kubenswrapper[26474]: I0223 13:27:07.227603 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 23 13:27:07.228446 master-0 kubenswrapper[26474]: I0223 13:27:07.227610 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 23 13:27:07.229159 master-0 kubenswrapper[26474]: I0223 13:27:07.229133 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 23 13:27:07.230765 master-0 kubenswrapper[26474]: I0223 13:27:07.230737 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 13:27:07.279437 master-0 kubenswrapper[26474]: I0223 13:27:07.279365 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bedc33-0750-4848-8abe-20a303ef99e5-config\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.279558 master-0 kubenswrapper[26474]: I0223 13:27:07.279459 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.279558 master-0 kubenswrapper[26474]: I0223 13:27:07.279526 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-81bbf1a0-0b6f-4100-ac1a-1f503cc9288d\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^87f317ae-44a7-483f-89bb-c9dc422c8f20\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.279558 master-0 kubenswrapper[26474]: I0223 13:27:07.279551 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.279906 master-0 kubenswrapper[26474]: I0223 13:27:07.279598 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq59d\" (UniqueName: \"kubernetes.io/projected/52bedc33-0750-4848-8abe-20a303ef99e5-kube-api-access-fq59d\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.279906 master-0 kubenswrapper[26474]: I0223 13:27:07.279646 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.279906 master-0 kubenswrapper[26474]: I0223 13:27:07.279674 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52bedc33-0750-4848-8abe-20a303ef99e5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.279906 master-0 kubenswrapper[26474]: I0223 13:27:07.279731 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/52bedc33-0750-4848-8abe-20a303ef99e5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.382603 master-0 kubenswrapper[26474]: I0223 13:27:07.382503 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.384034 master-0 kubenswrapper[26474]: I0223 13:27:07.383976 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-81bbf1a0-0b6f-4100-ac1a-1f503cc9288d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^87f317ae-44a7-483f-89bb-c9dc422c8f20\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.384106 master-0 kubenswrapper[26474]: I0223 13:27:07.384061 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.384208 master-0 kubenswrapper[26474]: I0223 13:27:07.384168 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq59d\" (UniqueName: \"kubernetes.io/projected/52bedc33-0750-4848-8abe-20a303ef99e5-kube-api-access-fq59d\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.384316 master-0 kubenswrapper[26474]: I0223 13:27:07.384275 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.384393 master-0 kubenswrapper[26474]: I0223 13:27:07.384367 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52bedc33-0750-4848-8abe-20a303ef99e5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.384516 master-0 kubenswrapper[26474]: I0223 13:27:07.384480 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52bedc33-0750-4848-8abe-20a303ef99e5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.384826 master-0 kubenswrapper[26474]: I0223 13:27:07.384683 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bedc33-0750-4848-8abe-20a303ef99e5-config\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.385772 master-0 kubenswrapper[26474]: I0223 13:27:07.385700 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52bedc33-0750-4848-8abe-20a303ef99e5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:07.386009 master-0 kubenswrapper[26474]: I0223 13:27:07.385970 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 23 13:27:07.386050 master-0 kubenswrapper[26474]: I0223 13:27:07.386014 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-81bbf1a0-0b6f-4100-ac1a-1f503cc9288d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^87f317ae-44a7-483f-89bb-c9dc422c8f20\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ca0cc191d2b9a915319396dc7f10300c61dae2821a787e964fa9d84e53a19e5e/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:07.387524 master-0 kubenswrapper[26474]: I0223 13:27:07.387278 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:07.388682 master-0 kubenswrapper[26474]: I0223 13:27:07.388632 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52bedc33-0750-4848-8abe-20a303ef99e5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:07.388791 master-0 kubenswrapper[26474]: I0223 13:27:07.388738 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bedc33-0750-4848-8abe-20a303ef99e5-config\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:07.389156 master-0 kubenswrapper[26474]: I0223 13:27:07.389122 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:07.391557 master-0 kubenswrapper[26474]: I0223 13:27:07.391485 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bedc33-0750-4848-8abe-20a303ef99e5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:07.420221 master-0 kubenswrapper[26474]: I0223 13:27:07.420146 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq59d\" (UniqueName: \"kubernetes.io/projected/52bedc33-0750-4848-8abe-20a303ef99e5-kube-api-access-fq59d\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:09.067503 master-0 kubenswrapper[26474]: I0223 13:27:09.067426 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-81bbf1a0-0b6f-4100-ac1a-1f503cc9288d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^87f317ae-44a7-483f-89bb-c9dc422c8f20\") pod \"ovsdbserver-nb-0\" (UID: \"52bedc33-0750-4848-8abe-20a303ef99e5\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:09.368502 master-0 kubenswrapper[26474]: I0223 13:27:09.365069 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 23 13:27:09.448802 master-0 kubenswrapper[26474]: I0223 13:27:09.447721 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 23 13:27:09.451093 master-0 kubenswrapper[26474]: I0223 13:27:09.451046 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.455386 master-0 kubenswrapper[26474]: I0223 13:27:09.455356 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 23 13:27:09.455650 master-0 kubenswrapper[26474]: I0223 13:27:09.455629 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 23 13:27:09.456850 master-0 kubenswrapper[26474]: I0223 13:27:09.456477 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 23 13:27:09.467229 master-0 kubenswrapper[26474]: I0223 13:27:09.467170 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 23 13:27:09.553332 master-0 kubenswrapper[26474]: I0223 13:27:09.553254 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bd67c5de-dc5d-4e4f-8bbd-cb66e89ed023\" (UniqueName: \"kubernetes.io/csi/topolvm.io^900a22e9-463c-4ab8-91e9-2f7962f7642b\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.553657 master-0 kubenswrapper[26474]: I0223 13:27:09.553423 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-config\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.553657 master-0 kubenswrapper[26474]: I0223 13:27:09.553452 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7ck6\" (UniqueName: \"kubernetes.io/projected/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-kube-api-access-j7ck6\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.553657 master-0 kubenswrapper[26474]: I0223 13:27:09.553539 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.553657 master-0 kubenswrapper[26474]: I0223 13:27:09.553571 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.553657 master-0 kubenswrapper[26474]: I0223 13:27:09.553648 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.553828 master-0 kubenswrapper[26474]: I0223 13:27:09.553679 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.553828 master-0 kubenswrapper[26474]: I0223 13:27:09.553708 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.655441 master-0 kubenswrapper[26474]: I0223 13:27:09.655288 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bd67c5de-dc5d-4e4f-8bbd-cb66e89ed023\" (UniqueName: \"kubernetes.io/csi/topolvm.io^900a22e9-463c-4ab8-91e9-2f7962f7642b\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.655441 master-0 kubenswrapper[26474]: I0223 13:27:09.655424 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-config\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.655441 master-0 kubenswrapper[26474]: I0223 13:27:09.655445 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7ck6\" (UniqueName: \"kubernetes.io/projected/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-kube-api-access-j7ck6\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.655762 master-0 kubenswrapper[26474]: I0223 13:27:09.655469 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.655762 master-0 kubenswrapper[26474]: I0223 13:27:09.655494 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.655762 master-0 kubenswrapper[26474]: I0223 13:27:09.655521 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.655762 master-0 kubenswrapper[26474]: I0223 13:27:09.655544 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.655762 master-0 kubenswrapper[26474]: I0223 13:27:09.655568 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.656937 master-0 kubenswrapper[26474]: I0223 13:27:09.656892 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-config\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.657398 master-0 kubenswrapper[26474]: I0223 13:27:09.657371 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.658273 master-0 kubenswrapper[26474]: I0223 13:27:09.658218 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.660997 master-0 kubenswrapper[26474]: I0223 13:27:09.660702 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 13:27:09.660997 master-0 kubenswrapper[26474]: I0223 13:27:09.660737 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bd67c5de-dc5d-4e4f-8bbd-cb66e89ed023\" (UniqueName: \"kubernetes.io/csi/topolvm.io^900a22e9-463c-4ab8-91e9-2f7962f7642b\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/cd9066af258a12c2efe2a3dec666f1051fb19efd4181b71744ac8ba2e8ccc4a3/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.662300 master-0 kubenswrapper[26474]: I0223 13:27:09.662264 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.664915 master-0 kubenswrapper[26474]: I0223 13:27:09.664830 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.674100 master-0 kubenswrapper[26474]: I0223 13:27:09.674035 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:09.674662 master-0 kubenswrapper[26474]: I0223 13:27:09.674624 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7ck6\" (UniqueName: \"kubernetes.io/projected/da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94-kube-api-access-j7ck6\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:11.071282 master-0 kubenswrapper[26474]: I0223 13:27:11.071215 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bd67c5de-dc5d-4e4f-8bbd-cb66e89ed023\" (UniqueName: \"kubernetes.io/csi/topolvm.io^900a22e9-463c-4ab8-91e9-2f7962f7642b\") pod \"ovsdbserver-sb-0\" (UID: \"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:11.276303 master-0 kubenswrapper[26474]: I0223 13:27:11.276231 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 23 13:27:11.439653 master-0 kubenswrapper[26474]: I0223 13:27:11.439100 26474 generic.go:334] "Generic (PLEG): container finished" podID="52179181-247c-44c6-8995-b355d526ceda" containerID="bce587b5ff2f96d043a6f0a26cb172c5dabd46e0c3a372d31357311936738e6a" exitCode=0
Feb 23 13:27:11.439653 master-0 kubenswrapper[26474]: I0223 13:27:11.439210 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-qgvzk" event={"ID":"52179181-247c-44c6-8995-b355d526ceda","Type":"ContainerDied","Data":"bce587b5ff2f96d043a6f0a26cb172c5dabd46e0c3a372d31357311936738e6a"}
Feb 23 13:27:11.502807 master-0 kubenswrapper[26474]: I0223 13:27:11.502738 26474 generic.go:334] "Generic (PLEG): container finished" podID="3078ad03-a115-4907-840e-a5c5057bed71" containerID="f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1" exitCode=0
Feb 23 13:27:11.503023 master-0 kubenswrapper[26474]: I0223 13:27:11.502831 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" event={"ID":"3078ad03-a115-4907-840e-a5c5057bed71","Type":"ContainerDied","Data":"f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1"}
Feb 23 13:27:11.530227 master-0 kubenswrapper[26474]: I0223 13:27:11.530164 26474 generic.go:334] "Generic (PLEG): container finished" podID="26b91121-7f3b-4ab5-83f5-ee336da9e897" containerID="055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb" exitCode=0
Feb 23 13:27:11.530372 master-0 kubenswrapper[26474]: I0223 13:27:11.530293 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" event={"ID":"26b91121-7f3b-4ab5-83f5-ee336da9e897","Type":"ContainerDied","Data":"055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb"}
Feb 23 13:27:11.572016 master-0 kubenswrapper[26474]: I0223 13:27:11.571946 26474 generic.go:334] "Generic (PLEEG): container finished"
podID="e49c8de5-6755-4f26-a7d1-160c777d7565" containerID="fd38e1d07b4e97fffec01280018eb65883481c25bf374190fe5e37f4a9f78377" exitCode=0 Feb 23 13:27:11.572254 master-0 kubenswrapper[26474]: I0223 13:27:11.572041 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-2ngf5" event={"ID":"e49c8de5-6755-4f26-a7d1-160c777d7565","Type":"ContainerDied","Data":"fd38e1d07b4e97fffec01280018eb65883481c25bf374190fe5e37f4a9f78377"} Feb 23 13:27:11.603709 master-0 kubenswrapper[26474]: W0223 13:27:11.598330 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60d0ffdf_e5a8_457e_ad9d_e23dd25679d1.slice/crio-ac23f4ae3b0ae68749aea7a3729e92a495009bc24f22b9e32013739c60d1f7f0 WatchSource:0}: Error finding container ac23f4ae3b0ae68749aea7a3729e92a495009bc24f22b9e32013739c60d1f7f0: Status 404 returned error can't find the container with id ac23f4ae3b0ae68749aea7a3729e92a495009bc24f22b9e32013739c60d1f7f0 Feb 23 13:27:11.603709 master-0 kubenswrapper[26474]: W0223 13:27:11.598727 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e48693_a2aa_426e_9718_5484046f9a4e.slice/crio-9d91547a968db543a8204e347623299e819676a3a2c08d568afce3284d51b61c WatchSource:0}: Error finding container 9d91547a968db543a8204e347623299e819676a3a2c08d568afce3284d51b61c: Status 404 returned error can't find the container with id 9d91547a968db543a8204e347623299e819676a3a2c08d568afce3284d51b61c Feb 23 13:27:11.603709 master-0 kubenswrapper[26474]: W0223 13:27:11.600977 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6581a5_b49c_4ad1_abd5_cfd583858288.slice/crio-c05dad8bbe998c5d17230e55c06cba680512f84eea8b0cf9c321e6d7bdbc4ce2 WatchSource:0}: Error finding container c05dad8bbe998c5d17230e55c06cba680512f84eea8b0cf9c321e6d7bdbc4ce2: 
Status 404 returned error can't find the container with id c05dad8bbe998c5d17230e55c06cba680512f84eea8b0cf9c321e6d7bdbc4ce2 Feb 23 13:27:11.643113 master-0 kubenswrapper[26474]: I0223 13:27:11.642850 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 13:27:11.657902 master-0 kubenswrapper[26474]: I0223 13:27:11.657832 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 13:27:11.665408 master-0 kubenswrapper[26474]: I0223 13:27:11.665355 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:27:11.675792 master-0 kubenswrapper[26474]: I0223 13:27:11.673672 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 13:27:11.801440 master-0 kubenswrapper[26474]: I0223 13:27:11.798818 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:27:11.898020 master-0 kubenswrapper[26474]: I0223 13:27:11.897900 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 13:27:12.122648 master-0 kubenswrapper[26474]: I0223 13:27:12.122475 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hlg7s"] Feb 23 13:27:12.153185 master-0 kubenswrapper[26474]: W0223 13:27:12.153118 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb309cab5_6d67_43b3_9a21_323910978e12.slice/crio-48c18222c0b005d9dfe36e34cb4db4899ace9ca3ea0508a43cc24b8edd5376f9 WatchSource:0}: Error finding container 48c18222c0b005d9dfe36e34cb4db4899ace9ca3ea0508a43cc24b8edd5376f9: Status 404 returned error can't find the container with id 48c18222c0b005d9dfe36e34cb4db4899ace9ca3ea0508a43cc24b8edd5376f9 Feb 23 13:27:12.366464 master-0 kubenswrapper[26474]: I0223 13:27:12.365883 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-2ngf5" Feb 23 13:27:12.375194 master-0 kubenswrapper[26474]: I0223 13:27:12.375135 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-qgvzk" Feb 23 13:27:12.450961 master-0 kubenswrapper[26474]: I0223 13:27:12.450686 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzxjb"] Feb 23 13:27:12.466875 master-0 kubenswrapper[26474]: I0223 13:27:12.466809 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppwhs\" (UniqueName: \"kubernetes.io/projected/e49c8de5-6755-4f26-a7d1-160c777d7565-kube-api-access-ppwhs\") pod \"e49c8de5-6755-4f26-a7d1-160c777d7565\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " Feb 23 13:27:12.467099 master-0 kubenswrapper[26474]: I0223 13:27:12.466896 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-config\") pod \"e49c8de5-6755-4f26-a7d1-160c777d7565\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " Feb 23 13:27:12.467099 master-0 kubenswrapper[26474]: I0223 13:27:12.466946 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tr2\" (UniqueName: \"kubernetes.io/projected/52179181-247c-44c6-8995-b355d526ceda-kube-api-access-k9tr2\") pod \"52179181-247c-44c6-8995-b355d526ceda\" (UID: \"52179181-247c-44c6-8995-b355d526ceda\") " Feb 23 13:27:12.467099 master-0 kubenswrapper[26474]: I0223 13:27:12.467063 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-dns-svc\") pod \"e49c8de5-6755-4f26-a7d1-160c777d7565\" (UID: \"e49c8de5-6755-4f26-a7d1-160c777d7565\") " Feb 23 13:27:12.467315 master-0 kubenswrapper[26474]: I0223 13:27:12.467282 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52179181-247c-44c6-8995-b355d526ceda-config\") pod \"52179181-247c-44c6-8995-b355d526ceda\" (UID: \"52179181-247c-44c6-8995-b355d526ceda\") " Feb 23 13:27:12.499367 master-0 kubenswrapper[26474]: I0223 13:27:12.499244 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52179181-247c-44c6-8995-b355d526ceda-kube-api-access-k9tr2" (OuterVolumeSpecName: "kube-api-access-k9tr2") pod "52179181-247c-44c6-8995-b355d526ceda" (UID: "52179181-247c-44c6-8995-b355d526ceda"). InnerVolumeSpecName "kube-api-access-k9tr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:12.501771 master-0 kubenswrapper[26474]: I0223 13:27:12.501701 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49c8de5-6755-4f26-a7d1-160c777d7565-kube-api-access-ppwhs" (OuterVolumeSpecName: "kube-api-access-ppwhs") pod "e49c8de5-6755-4f26-a7d1-160c777d7565" (UID: "e49c8de5-6755-4f26-a7d1-160c777d7565"). InnerVolumeSpecName "kube-api-access-ppwhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:12.507767 master-0 kubenswrapper[26474]: W0223 13:27:12.507687 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda0d2cda_0b9f_4cd2_a7b8_1642a7cd0f94.slice/crio-3a40a49bd2502eccf71d65d36e3ff618b4a68977707b11ddc8355f7b790ea338 WatchSource:0}: Error finding container 3a40a49bd2502eccf71d65d36e3ff618b4a68977707b11ddc8355f7b790ea338: Status 404 returned error can't find the container with id 3a40a49bd2502eccf71d65d36e3ff618b4a68977707b11ddc8355f7b790ea338 Feb 23 13:27:12.509714 master-0 kubenswrapper[26474]: I0223 13:27:12.509653 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 13:27:12.510767 master-0 kubenswrapper[26474]: I0223 13:27:12.510713 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e49c8de5-6755-4f26-a7d1-160c777d7565" (UID: "e49c8de5-6755-4f26-a7d1-160c777d7565"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:12.531890 master-0 kubenswrapper[26474]: I0223 13:27:12.531821 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52179181-247c-44c6-8995-b355d526ceda-config" (OuterVolumeSpecName: "config") pod "52179181-247c-44c6-8995-b355d526ceda" (UID: "52179181-247c-44c6-8995-b355d526ceda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:12.542189 master-0 kubenswrapper[26474]: I0223 13:27:12.542102 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-config" (OuterVolumeSpecName: "config") pod "e49c8de5-6755-4f26-a7d1-160c777d7565" (UID: "e49c8de5-6755-4f26-a7d1-160c777d7565"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:12.570078 master-0 kubenswrapper[26474]: I0223 13:27:12.570006 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppwhs\" (UniqueName: \"kubernetes.io/projected/e49c8de5-6755-4f26-a7d1-160c777d7565-kube-api-access-ppwhs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:12.570078 master-0 kubenswrapper[26474]: I0223 13:27:12.570056 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:12.570078 master-0 kubenswrapper[26474]: I0223 13:27:12.570069 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9tr2\" (UniqueName: \"kubernetes.io/projected/52179181-247c-44c6-8995-b355d526ceda-kube-api-access-k9tr2\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:12.570078 master-0 kubenswrapper[26474]: I0223 13:27:12.570078 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49c8de5-6755-4f26-a7d1-160c777d7565-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:12.570078 master-0 kubenswrapper[26474]: I0223 13:27:12.570090 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52179181-247c-44c6-8995-b355d526ceda-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:12.591005 master-0 kubenswrapper[26474]: I0223 13:27:12.590900 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" event={"ID":"26b91121-7f3b-4ab5-83f5-ee336da9e897","Type":"ContainerStarted","Data":"283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a"} Feb 23 13:27:12.591586 master-0 kubenswrapper[26474]: I0223 13:27:12.591536 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" Feb 23 13:27:12.595736 master-0 kubenswrapper[26474]: I0223 13:27:12.595624 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"52bedc33-0750-4848-8abe-20a303ef99e5","Type":"ContainerStarted","Data":"23b0fa6afdfa866ae521c9adeb8ec0512bc040ba6a31d45092a32dc98832eb0b"} Feb 23 13:27:12.602459 master-0 kubenswrapper[26474]: I0223 13:27:12.597792 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-qgvzk" event={"ID":"52179181-247c-44c6-8995-b355d526ceda","Type":"ContainerDied","Data":"7f38c4f018c12dbb789d227d200c286356fb6306756fe547e85d7ce1295682e2"} Feb 23 13:27:12.602459 master-0 kubenswrapper[26474]: I0223 13:27:12.597827 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-qgvzk" Feb 23 13:27:12.602459 master-0 kubenswrapper[26474]: I0223 13:27:12.597857 26474 scope.go:117] "RemoveContainer" containerID="bce587b5ff2f96d043a6f0a26cb172c5dabd46e0c3a372d31357311936738e6a" Feb 23 13:27:12.602459 master-0 kubenswrapper[26474]: I0223 13:27:12.601231 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" event={"ID":"3078ad03-a115-4907-840e-a5c5057bed71","Type":"ContainerStarted","Data":"1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae"} Feb 23 13:27:12.602459 master-0 kubenswrapper[26474]: I0223 13:27:12.601463 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" Feb 23 13:27:12.604956 master-0 kubenswrapper[26474]: I0223 13:27:12.604915 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb" event={"ID":"106bae0e-78dc-455a-bca3-35057d5a145a","Type":"ContainerStarted","Data":"43e91e1450005c904594e456b68e386fdbc5b9b9cce0fc6d3425a9bf384f3766"} Feb 23 13:27:12.606436 master-0 kubenswrapper[26474]: I0223 13:27:12.606397 
26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlg7s" event={"ID":"b309cab5-6d67-43b3-9a21-323910978e12","Type":"ContainerStarted","Data":"48c18222c0b005d9dfe36e34cb4db4899ace9ca3ea0508a43cc24b8edd5376f9"} Feb 23 13:27:12.608410 master-0 kubenswrapper[26474]: I0223 13:27:12.608278 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"08e48693-a2aa-426e-9718-5484046f9a4e","Type":"ContainerStarted","Data":"9d91547a968db543a8204e347623299e819676a3a2c08d568afce3284d51b61c"} Feb 23 13:27:12.611461 master-0 kubenswrapper[26474]: I0223 13:27:12.610867 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7466826b-28a5-465e-9f60-484489173aa4","Type":"ContainerStarted","Data":"b2891b22da50561bda836108e6d97b6de7e1009a6cd2c52e91b0d573982e91b4"} Feb 23 13:27:12.636724 master-0 kubenswrapper[26474]: I0223 13:27:12.624512 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" podStartSLOduration=3.148329049 podStartE2EDuration="18.624484427s" podCreationTimestamp="2026-02-23 13:26:54 +0000 UTC" firstStartedPulling="2026-02-23 13:26:55.262068773 +0000 UTC m=+737.108576450" lastFinishedPulling="2026-02-23 13:27:10.738224151 +0000 UTC m=+752.584731828" observedRunningTime="2026-02-23 13:27:12.616001681 +0000 UTC m=+754.462509388" watchObservedRunningTime="2026-02-23 13:27:12.624484427 +0000 UTC m=+754.470992134" Feb 23 13:27:12.636724 master-0 kubenswrapper[26474]: I0223 13:27:12.630321 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-2ngf5" event={"ID":"e49c8de5-6755-4f26-a7d1-160c777d7565","Type":"ContainerDied","Data":"a91d3be97c90f3626a3da4fb8d5d4a2c657194c4b06525c9086e0cbf13a4e879"} Feb 23 13:27:12.636724 master-0 kubenswrapper[26474]: I0223 13:27:12.630441 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-2ngf5" Feb 23 13:27:12.646703 master-0 kubenswrapper[26474]: I0223 13:27:12.646587 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" podStartSLOduration=3.282176447 podStartE2EDuration="18.646567913s" podCreationTimestamp="2026-02-23 13:26:54 +0000 UTC" firstStartedPulling="2026-02-23 13:26:55.520407234 +0000 UTC m=+737.366914911" lastFinishedPulling="2026-02-23 13:27:10.8847987 +0000 UTC m=+752.731306377" observedRunningTime="2026-02-23 13:27:12.645139788 +0000 UTC m=+754.491647465" watchObservedRunningTime="2026-02-23 13:27:12.646567913 +0000 UTC m=+754.493075600" Feb 23 13:27:12.649206 master-0 kubenswrapper[26474]: I0223 13:27:12.649125 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9502e2b0-2a39-47b6-b482-f13048ccdf41","Type":"ContainerStarted","Data":"018d604830d1b9d5cdee5e77dcd7ab2240403f399cf88cb99727409f97db8d27"} Feb 23 13:27:12.649288 master-0 kubenswrapper[26474]: I0223 13:27:12.649238 26474 scope.go:117] "RemoveContainer" containerID="fd38e1d07b4e97fffec01280018eb65883481c25bf374190fe5e37f4a9f78377" Feb 23 13:27:12.715183 master-0 kubenswrapper[26474]: I0223 13:27:12.704386 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bc6581a5-b49c-4ad1-abd5-cfd583858288","Type":"ContainerStarted","Data":"c05dad8bbe998c5d17230e55c06cba680512f84eea8b0cf9c321e6d7bdbc4ce2"} Feb 23 13:27:12.724738 master-0 kubenswrapper[26474]: I0223 13:27:12.723641 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94","Type":"ContainerStarted","Data":"3a40a49bd2502eccf71d65d36e3ff618b4a68977707b11ddc8355f7b790ea338"} Feb 23 13:27:12.727321 master-0 kubenswrapper[26474]: I0223 13:27:12.726848 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1","Type":"ContainerStarted","Data":"ac23f4ae3b0ae68749aea7a3729e92a495009bc24f22b9e32013739c60d1f7f0"} Feb 23 13:27:12.739397 master-0 kubenswrapper[26474]: I0223 13:27:12.739316 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-qgvzk"] Feb 23 13:27:12.766094 master-0 kubenswrapper[26474]: I0223 13:27:12.765868 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-qgvzk"] Feb 23 13:27:12.828421 master-0 kubenswrapper[26474]: I0223 13:27:12.828234 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-2ngf5"] Feb 23 13:27:12.886631 master-0 kubenswrapper[26474]: I0223 13:27:12.886499 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-2ngf5"] Feb 23 13:27:12.976388 master-0 kubenswrapper[26474]: E0223 13:27:12.976312 26474 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52179181_247c_44c6_8995_b355d526ceda.slice\": RecentStats: unable to find data in memory cache]" Feb 23 13:27:14.411676 master-0 kubenswrapper[26474]: I0223 13:27:14.411597 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52179181-247c-44c6-8995-b355d526ceda" path="/var/lib/kubelet/pods/52179181-247c-44c6-8995-b355d526ceda/volumes" Feb 23 13:27:14.412399 master-0 kubenswrapper[26474]: I0223 13:27:14.412359 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49c8de5-6755-4f26-a7d1-160c777d7565" path="/var/lib/kubelet/pods/e49c8de5-6755-4f26-a7d1-160c777d7565/volumes" Feb 23 13:27:19.623166 master-0 kubenswrapper[26474]: I0223 13:27:19.622844 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" Feb 23 
13:27:19.997329 master-0 kubenswrapper[26474]: I0223 13:27:19.997269 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" Feb 23 13:27:20.092685 master-0 kubenswrapper[26474]: I0223 13:27:20.083072 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-lcbqd"] Feb 23 13:27:20.092685 master-0 kubenswrapper[26474]: I0223 13:27:20.083373 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" podUID="26b91121-7f3b-4ab5-83f5-ee336da9e897" containerName="dnsmasq-dns" containerID="cri-o://283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a" gracePeriod=10 Feb 23 13:27:20.675077 master-0 kubenswrapper[26474]: I0223 13:27:20.674982 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" Feb 23 13:27:20.795308 master-0 kubenswrapper[26474]: I0223 13:27:20.793732 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4dlp\" (UniqueName: \"kubernetes.io/projected/26b91121-7f3b-4ab5-83f5-ee336da9e897-kube-api-access-z4dlp\") pod \"26b91121-7f3b-4ab5-83f5-ee336da9e897\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " Feb 23 13:27:20.795308 master-0 kubenswrapper[26474]: I0223 13:27:20.793890 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-config\") pod \"26b91121-7f3b-4ab5-83f5-ee336da9e897\" (UID: \"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " Feb 23 13:27:20.795308 master-0 kubenswrapper[26474]: I0223 13:27:20.794014 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-dns-svc\") pod \"26b91121-7f3b-4ab5-83f5-ee336da9e897\" (UID: 
\"26b91121-7f3b-4ab5-83f5-ee336da9e897\") " Feb 23 13:27:20.797401 master-0 kubenswrapper[26474]: I0223 13:27:20.797245 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b91121-7f3b-4ab5-83f5-ee336da9e897-kube-api-access-z4dlp" (OuterVolumeSpecName: "kube-api-access-z4dlp") pod "26b91121-7f3b-4ab5-83f5-ee336da9e897" (UID: "26b91121-7f3b-4ab5-83f5-ee336da9e897"). InnerVolumeSpecName "kube-api-access-z4dlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:20.830221 master-0 kubenswrapper[26474]: I0223 13:27:20.830144 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bc6581a5-b49c-4ad1-abd5-cfd583858288","Type":"ContainerStarted","Data":"1a0b746db0f787a53fbbf6f3d18a67d789a7984d81d4759da6e12a2e61bf0fc1"} Feb 23 13:27:20.830465 master-0 kubenswrapper[26474]: I0223 13:27:20.830293 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 23 13:27:20.833864 master-0 kubenswrapper[26474]: I0223 13:27:20.833817 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94","Type":"ContainerStarted","Data":"650ae2b85e02dfb452a31fb4e7e66589763a054b274b58b891d519d0c30960eb"} Feb 23 13:27:20.836980 master-0 kubenswrapper[26474]: I0223 13:27:20.836933 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb" event={"ID":"106bae0e-78dc-455a-bca3-35057d5a145a","Type":"ContainerStarted","Data":"5a141de7fb679ed99968626a778a9376d8e5b2ee5d73be894cb285be95c7a010"} Feb 23 13:27:20.837149 master-0 kubenswrapper[26474]: I0223 13:27:20.837107 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qzxjb" Feb 23 13:27:20.841457 master-0 kubenswrapper[26474]: I0223 13:27:20.840376 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-hlg7s" event={"ID":"b309cab5-6d67-43b3-9a21-323910978e12","Type":"ContainerStarted","Data":"b855df6e67c1eebf6e0ec4a48ab00e8301b782251d792a94ace87b3462e2ef54"} Feb 23 13:27:20.842012 master-0 kubenswrapper[26474]: I0223 13:27:20.841972 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1","Type":"ContainerStarted","Data":"d606a64f126ebfe1c0ae79d0f8f315432293e4f90520d2d1c488b3361696965f"} Feb 23 13:27:20.844294 master-0 kubenswrapper[26474]: I0223 13:27:20.844251 26474 generic.go:334] "Generic (PLEG): container finished" podID="26b91121-7f3b-4ab5-83f5-ee336da9e897" containerID="283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a" exitCode=0 Feb 23 13:27:20.844392 master-0 kubenswrapper[26474]: I0223 13:27:20.844299 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" Feb 23 13:27:20.844392 master-0 kubenswrapper[26474]: I0223 13:27:20.844324 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" event={"ID":"26b91121-7f3b-4ab5-83f5-ee336da9e897","Type":"ContainerDied","Data":"283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a"} Feb 23 13:27:20.844392 master-0 kubenswrapper[26474]: I0223 13:27:20.844364 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-lcbqd" event={"ID":"26b91121-7f3b-4ab5-83f5-ee336da9e897","Type":"ContainerDied","Data":"0663a70e704e94bf590b0d74b3d41a36240a407d86ecfe911b2160aa9a327086"} Feb 23 13:27:20.844392 master-0 kubenswrapper[26474]: I0223 13:27:20.844386 26474 scope.go:117] "RemoveContainer" containerID="283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a" Feb 23 13:27:20.848677 master-0 kubenswrapper[26474]: I0223 13:27:20.847748 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26b91121-7f3b-4ab5-83f5-ee336da9e897" (UID: "26b91121-7f3b-4ab5-83f5-ee336da9e897"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:20.853751 master-0 kubenswrapper[26474]: I0223 13:27:20.853499 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"52bedc33-0750-4848-8abe-20a303ef99e5","Type":"ContainerStarted","Data":"4bdf522c0bf31afc6f226da55c6e3339a5bcb0e52da727e810938652cd3a59e7"} Feb 23 13:27:20.855867 master-0 kubenswrapper[26474]: I0223 13:27:20.855801 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7466826b-28a5-465e-9f60-484489173aa4","Type":"ContainerStarted","Data":"a079c51fa9cf31ca704ae3157f009ee8281b8ed36ca488dad13f098b34781e74"} Feb 23 13:27:20.860317 master-0 kubenswrapper[26474]: I0223 13:27:20.860135 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-config" (OuterVolumeSpecName: "config") pod "26b91121-7f3b-4ab5-83f5-ee336da9e897" (UID: "26b91121-7f3b-4ab5-83f5-ee336da9e897"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:20.861156 master-0 kubenswrapper[26474]: I0223 13:27:20.861076 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.483734999 podStartE2EDuration="22.861054499s" podCreationTimestamp="2026-02-23 13:26:58 +0000 UTC" firstStartedPulling="2026-02-23 13:27:11.612150359 +0000 UTC m=+753.458658036" lastFinishedPulling="2026-02-23 13:27:19.989469869 +0000 UTC m=+761.835977536" observedRunningTime="2026-02-23 13:27:20.848105665 +0000 UTC m=+762.694613342" watchObservedRunningTime="2026-02-23 13:27:20.861054499 +0000 UTC m=+762.707562176" Feb 23 13:27:20.884106 master-0 kubenswrapper[26474]: I0223 13:27:20.883987 26474 scope.go:117] "RemoveContainer" containerID="055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb" Feb 23 13:27:20.896161 master-0 kubenswrapper[26474]: I0223 13:27:20.894938 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qzxjb" podStartSLOduration=8.287502515 podStartE2EDuration="15.894903661s" podCreationTimestamp="2026-02-23 13:27:05 +0000 UTC" firstStartedPulling="2026-02-23 13:27:12.433262165 +0000 UTC m=+754.279769842" lastFinishedPulling="2026-02-23 13:27:20.040663311 +0000 UTC m=+761.887170988" observedRunningTime="2026-02-23 13:27:20.882997192 +0000 UTC m=+762.729504869" watchObservedRunningTime="2026-02-23 13:27:20.894903661 +0000 UTC m=+762.741411338" Feb 23 13:27:20.902652 master-0 kubenswrapper[26474]: I0223 13:27:20.900121 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:20.902652 master-0 kubenswrapper[26474]: I0223 13:27:20.900175 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4dlp\" (UniqueName: 
\"kubernetes.io/projected/26b91121-7f3b-4ab5-83f5-ee336da9e897-kube-api-access-z4dlp\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:20.902652 master-0 kubenswrapper[26474]: I0223 13:27:20.900189 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b91121-7f3b-4ab5-83f5-ee336da9e897-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:20.914359 master-0 kubenswrapper[26474]: I0223 13:27:20.914207 26474 scope.go:117] "RemoveContainer" containerID="283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a" Feb 23 13:27:20.916803 master-0 kubenswrapper[26474]: E0223 13:27:20.916768 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a\": container with ID starting with 283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a not found: ID does not exist" containerID="283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a" Feb 23 13:27:20.916895 master-0 kubenswrapper[26474]: I0223 13:27:20.916812 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a"} err="failed to get container status \"283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a\": rpc error: code = NotFound desc = could not find container \"283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a\": container with ID starting with 283aa36b85f2d57a14a8f12b69328c95dee78c30a27d681598284ae2d280cb3a not found: ID does not exist" Feb 23 13:27:20.916895 master-0 kubenswrapper[26474]: I0223 13:27:20.916839 26474 scope.go:117] "RemoveContainer" containerID="055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb" Feb 23 13:27:20.917606 master-0 kubenswrapper[26474]: E0223 13:27:20.917566 26474 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb\": container with ID starting with 055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb not found: ID does not exist" containerID="055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb" Feb 23 13:27:20.917676 master-0 kubenswrapper[26474]: I0223 13:27:20.917647 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb"} err="failed to get container status \"055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb\": rpc error: code = NotFound desc = could not find container \"055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb\": container with ID starting with 055c580a3b3f77922c32e654da75182f2526db56880148e307f2d283ac86eddb not found: ID does not exist" Feb 23 13:27:21.450865 master-0 kubenswrapper[26474]: I0223 13:27:21.450787 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-lcbqd"] Feb 23 13:27:21.460725 master-0 kubenswrapper[26474]: I0223 13:27:21.460652 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-lcbqd"] Feb 23 13:27:21.870046 master-0 kubenswrapper[26474]: I0223 13:27:21.869982 26474 generic.go:334] "Generic (PLEG): container finished" podID="b309cab5-6d67-43b3-9a21-323910978e12" containerID="b855df6e67c1eebf6e0ec4a48ab00e8301b782251d792a94ace87b3462e2ef54" exitCode=0 Feb 23 13:27:21.870579 master-0 kubenswrapper[26474]: I0223 13:27:21.870123 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlg7s" event={"ID":"b309cab5-6d67-43b3-9a21-323910978e12","Type":"ContainerDied","Data":"b855df6e67c1eebf6e0ec4a48ab00e8301b782251d792a94ace87b3462e2ef54"} Feb 23 13:27:21.874159 master-0 kubenswrapper[26474]: I0223 13:27:21.874109 26474 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"08e48693-a2aa-426e-9718-5484046f9a4e","Type":"ContainerStarted","Data":"db122ba569957660a3e28b667ce2bd55ce0e307a2d5715926dec6313a158dbf4"} Feb 23 13:27:22.435818 master-0 kubenswrapper[26474]: I0223 13:27:22.435646 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b91121-7f3b-4ab5-83f5-ee336da9e897" path="/var/lib/kubelet/pods/26b91121-7f3b-4ab5-83f5-ee336da9e897/volumes" Feb 23 13:27:22.888814 master-0 kubenswrapper[26474]: I0223 13:27:22.888692 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94","Type":"ContainerStarted","Data":"c9e581d550c49c8c95bcf00281936648492e93695a97b6d740ad5eaa9ac37e61"} Feb 23 13:27:22.890918 master-0 kubenswrapper[26474]: I0223 13:27:22.890867 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlg7s" event={"ID":"b309cab5-6d67-43b3-9a21-323910978e12","Type":"ContainerStarted","Data":"6fb4ee64210f4e86f687d598c66c99051910b717d24f04e73b2d0e2bc4444f40"} Feb 23 13:27:22.891001 master-0 kubenswrapper[26474]: I0223 13:27:22.890925 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlg7s" event={"ID":"b309cab5-6d67-43b3-9a21-323910978e12","Type":"ContainerStarted","Data":"e36a6b6bb35392c1352dfb51b74fa6360bd3020fef127085f2a26a005b2990c7"} Feb 23 13:27:22.891063 master-0 kubenswrapper[26474]: I0223 13:27:22.891035 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:22.891063 master-0 kubenswrapper[26474]: I0223 13:27:22.891057 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:22.892783 master-0 kubenswrapper[26474]: I0223 13:27:22.892741 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"52bedc33-0750-4848-8abe-20a303ef99e5","Type":"ContainerStarted","Data":"de81782ebbb9844de0bfe297ff9eb2a261bf35b83dd378dc97aee474ec7c4b5d"} Feb 23 13:27:22.894513 master-0 kubenswrapper[26474]: I0223 13:27:22.894487 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9502e2b0-2a39-47b6-b482-f13048ccdf41","Type":"ContainerStarted","Data":"0cc1a2ffd006a8b964b71aa1ac5fa220e9afc8cf5659338483b95b0a2af6ee35"} Feb 23 13:27:22.916322 master-0 kubenswrapper[26474]: I0223 13:27:22.916229 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.171132399 podStartE2EDuration="15.916211185s" podCreationTimestamp="2026-02-23 13:27:07 +0000 UTC" firstStartedPulling="2026-02-23 13:27:12.51216379 +0000 UTC m=+754.358671467" lastFinishedPulling="2026-02-23 13:27:22.257242576 +0000 UTC m=+764.103750253" observedRunningTime="2026-02-23 13:27:22.908959099 +0000 UTC m=+764.755466786" watchObservedRunningTime="2026-02-23 13:27:22.916211185 +0000 UTC m=+764.762718852" Feb 23 13:27:22.935987 master-0 kubenswrapper[26474]: I0223 13:27:22.935905 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.586862985 podStartE2EDuration="17.935886483s" podCreationTimestamp="2026-02-23 13:27:05 +0000 UTC" firstStartedPulling="2026-02-23 13:27:11.90708465 +0000 UTC m=+753.753592327" lastFinishedPulling="2026-02-23 13:27:22.256108148 +0000 UTC m=+764.102615825" observedRunningTime="2026-02-23 13:27:22.931898956 +0000 UTC m=+764.778406633" watchObservedRunningTime="2026-02-23 13:27:22.935886483 +0000 UTC m=+764.782394160" Feb 23 13:27:22.988731 master-0 kubenswrapper[26474]: I0223 13:27:22.988637 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hlg7s" podStartSLOduration=10.140176405 
podStartE2EDuration="17.988618734s" podCreationTimestamp="2026-02-23 13:27:05 +0000 UTC" firstStartedPulling="2026-02-23 13:27:12.155268345 +0000 UTC m=+754.001776022" lastFinishedPulling="2026-02-23 13:27:20.003710674 +0000 UTC m=+761.850218351" observedRunningTime="2026-02-23 13:27:22.98144756 +0000 UTC m=+764.827955257" watchObservedRunningTime="2026-02-23 13:27:22.988618734 +0000 UTC m=+764.835126401" Feb 23 13:27:23.277564 master-0 kubenswrapper[26474]: I0223 13:27:23.277485 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 23 13:27:23.319049 master-0 kubenswrapper[26474]: I0223 13:27:23.318978 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 23 13:27:23.903602 master-0 kubenswrapper[26474]: I0223 13:27:23.903548 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 23 13:27:24.366033 master-0 kubenswrapper[26474]: I0223 13:27:24.365920 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:24.366033 master-0 kubenswrapper[26474]: I0223 13:27:24.366027 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:24.443353 master-0 kubenswrapper[26474]: I0223 13:27:24.443265 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:24.917831 master-0 kubenswrapper[26474]: I0223 13:27:24.917735 26474 generic.go:334] "Generic (PLEG): container finished" podID="60d0ffdf-e5a8-457e-ad9d-e23dd25679d1" containerID="d606a64f126ebfe1c0ae79d0f8f315432293e4f90520d2d1c488b3361696965f" exitCode=0 Feb 23 13:27:24.919239 master-0 kubenswrapper[26474]: I0223 13:27:24.917826 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1","Type":"ContainerDied","Data":"d606a64f126ebfe1c0ae79d0f8f315432293e4f90520d2d1c488b3361696965f"} Feb 23 13:27:25.927923 master-0 kubenswrapper[26474]: I0223 13:27:25.927863 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60d0ffdf-e5a8-457e-ad9d-e23dd25679d1","Type":"ContainerStarted","Data":"92bb1cbf70d4a170f29dce81cd4c6094faad893674dc2ffc83bfe88f0363f256"} Feb 23 13:27:25.929927 master-0 kubenswrapper[26474]: I0223 13:27:25.929891 26474 generic.go:334] "Generic (PLEG): container finished" podID="7466826b-28a5-465e-9f60-484489173aa4" containerID="a079c51fa9cf31ca704ae3157f009ee8281b8ed36ca488dad13f098b34781e74" exitCode=0 Feb 23 13:27:25.930050 master-0 kubenswrapper[26474]: I0223 13:27:25.929989 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7466826b-28a5-465e-9f60-484489173aa4","Type":"ContainerDied","Data":"a079c51fa9cf31ca704ae3157f009ee8281b8ed36ca488dad13f098b34781e74"} Feb 23 13:27:25.980772 master-0 kubenswrapper[26474]: I0223 13:27:25.980712 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 23 13:27:26.131000 master-0 kubenswrapper[26474]: I0223 13:27:26.130923 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.686748161 podStartE2EDuration="29.130905013s" podCreationTimestamp="2026-02-23 13:26:57 +0000 UTC" firstStartedPulling="2026-02-23 13:27:11.605523368 +0000 UTC m=+753.452031045" lastFinishedPulling="2026-02-23 13:27:20.04968022 +0000 UTC m=+761.896187897" observedRunningTime="2026-02-23 13:27:26.05209427 +0000 UTC m=+767.898601997" watchObservedRunningTime="2026-02-23 13:27:26.130905013 +0000 UTC m=+767.977412690" Feb 23 13:27:26.327435 master-0 kubenswrapper[26474]: I0223 13:27:26.327375 26474 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 23 13:27:26.947506 master-0 kubenswrapper[26474]: I0223 13:27:26.947433 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7466826b-28a5-465e-9f60-484489173aa4","Type":"ContainerStarted","Data":"90b72d289fc8b5bac7674de86cefae44ad80178bd13721906188e53a46314351"} Feb 23 13:27:27.233579 master-0 kubenswrapper[26474]: I0223 13:27:27.231979 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.727637251 podStartE2EDuration="31.231942824s" podCreationTimestamp="2026-02-23 13:26:56 +0000 UTC" firstStartedPulling="2026-02-23 13:27:11.627881241 +0000 UTC m=+753.474388908" lastFinishedPulling="2026-02-23 13:27:20.132186804 +0000 UTC m=+761.978694481" observedRunningTime="2026-02-23 13:27:27.225710684 +0000 UTC m=+769.072218391" watchObservedRunningTime="2026-02-23 13:27:27.231942824 +0000 UTC m=+769.078450511" Feb 23 13:27:27.659874 master-0 kubenswrapper[26474]: I0223 13:27:27.659740 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-z2dw7"] Feb 23 13:27:27.660307 master-0 kubenswrapper[26474]: E0223 13:27:27.660275 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52179181-247c-44c6-8995-b355d526ceda" containerName="init" Feb 23 13:27:27.660307 master-0 kubenswrapper[26474]: I0223 13:27:27.660302 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="52179181-247c-44c6-8995-b355d526ceda" containerName="init" Feb 23 13:27:27.660465 master-0 kubenswrapper[26474]: E0223 13:27:27.660381 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49c8de5-6755-4f26-a7d1-160c777d7565" containerName="init" Feb 23 13:27:27.660465 master-0 kubenswrapper[26474]: I0223 13:27:27.660396 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49c8de5-6755-4f26-a7d1-160c777d7565" containerName="init" Feb 
23 13:27:27.660465 master-0 kubenswrapper[26474]: E0223 13:27:27.660437 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b91121-7f3b-4ab5-83f5-ee336da9e897" containerName="init" Feb 23 13:27:27.660465 master-0 kubenswrapper[26474]: I0223 13:27:27.660447 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b91121-7f3b-4ab5-83f5-ee336da9e897" containerName="init" Feb 23 13:27:27.660465 master-0 kubenswrapper[26474]: E0223 13:27:27.660459 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b91121-7f3b-4ab5-83f5-ee336da9e897" containerName="dnsmasq-dns" Feb 23 13:27:27.660465 master-0 kubenswrapper[26474]: I0223 13:27:27.660467 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b91121-7f3b-4ab5-83f5-ee336da9e897" containerName="dnsmasq-dns" Feb 23 13:27:27.660712 master-0 kubenswrapper[26474]: I0223 13:27:27.660697 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b91121-7f3b-4ab5-83f5-ee336da9e897" containerName="dnsmasq-dns" Feb 23 13:27:27.660767 master-0 kubenswrapper[26474]: I0223 13:27:27.660719 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="52179181-247c-44c6-8995-b355d526ceda" containerName="init" Feb 23 13:27:27.660818 master-0 kubenswrapper[26474]: I0223 13:27:27.660777 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49c8de5-6755-4f26-a7d1-160c777d7565" containerName="init" Feb 23 13:27:27.661994 master-0 kubenswrapper[26474]: I0223 13:27:27.661961 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.667998 master-0 kubenswrapper[26474]: I0223 13:27:27.667661 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 23 13:27:27.673150 master-0 kubenswrapper[26474]: I0223 13:27:27.672689 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbbvv\" (UniqueName: \"kubernetes.io/projected/75e18619-a50b-4493-81a7-3b8566c38586-kube-api-access-pbbvv\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.673150 master-0 kubenswrapper[26474]: I0223 13:27:27.673071 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-ovsdbserver-nb\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.673641 master-0 kubenswrapper[26474]: I0223 13:27:27.673568 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-dns-svc\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.673714 master-0 kubenswrapper[26474]: I0223 13:27:27.673700 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-config\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.681707 master-0 kubenswrapper[26474]: I0223 13:27:27.681648 26474 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-k9gmf"] Feb 23 13:27:27.683521 master-0 kubenswrapper[26474]: I0223 13:27:27.683479 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.685444 master-0 kubenswrapper[26474]: I0223 13:27:27.685323 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 23 13:27:27.769129 master-0 kubenswrapper[26474]: I0223 13:27:27.769054 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-z2dw7"] Feb 23 13:27:27.777866 master-0 kubenswrapper[26474]: I0223 13:27:27.776716 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsg7j\" (UniqueName: \"kubernetes.io/projected/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-kube-api-access-lsg7j\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.778193 master-0 kubenswrapper[26474]: I0223 13:27:27.778172 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-ovsdbserver-nb\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.778400 master-0 kubenswrapper[26474]: I0223 13:27:27.778313 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-combined-ca-bundle\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.778525 master-0 kubenswrapper[26474]: I0223 
13:27:27.778510 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.778639 master-0 kubenswrapper[26474]: I0223 13:27:27.778626 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-ovn-rundir\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.778728 master-0 kubenswrapper[26474]: I0223 13:27:27.778715 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-dns-svc\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.778820 master-0 kubenswrapper[26474]: I0223 13:27:27.778807 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-ovs-rundir\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.778905 master-0 kubenswrapper[26474]: I0223 13:27:27.778892 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-config\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 
13:27:27.778992 master-0 kubenswrapper[26474]: I0223 13:27:27.778979 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbbvv\" (UniqueName: \"kubernetes.io/projected/75e18619-a50b-4493-81a7-3b8566c38586-kube-api-access-pbbvv\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.779116 master-0 kubenswrapper[26474]: I0223 13:27:27.779102 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-config\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.779251 master-0 kubenswrapper[26474]: I0223 13:27:27.779203 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-ovsdbserver-nb\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.779833 master-0 kubenswrapper[26474]: I0223 13:27:27.779803 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-dns-svc\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.780521 master-0 kubenswrapper[26474]: I0223 13:27:27.780503 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-config\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.791269 
master-0 kubenswrapper[26474]: I0223 13:27:27.789506 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-k9gmf"] Feb 23 13:27:27.864367 master-0 kubenswrapper[26474]: I0223 13:27:27.863298 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbbvv\" (UniqueName: \"kubernetes.io/projected/75e18619-a50b-4493-81a7-3b8566c38586-kube-api-access-pbbvv\") pod \"dnsmasq-dns-5c685c7df5-z2dw7\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") " pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:27.883367 master-0 kubenswrapper[26474]: I0223 13:27:27.882386 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-config\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.883367 master-0 kubenswrapper[26474]: I0223 13:27:27.882505 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsg7j\" (UniqueName: \"kubernetes.io/projected/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-kube-api-access-lsg7j\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.883367 master-0 kubenswrapper[26474]: I0223 13:27:27.882566 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-combined-ca-bundle\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.883367 master-0 kubenswrapper[26474]: I0223 13:27:27.882599 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.883367 master-0 kubenswrapper[26474]: I0223 13:27:27.882642 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-ovn-rundir\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.883367 master-0 kubenswrapper[26474]: I0223 13:27:27.882676 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-ovs-rundir\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.890593 master-0 kubenswrapper[26474]: I0223 13:27:27.890521 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-ovs-rundir\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.890825 master-0 kubenswrapper[26474]: I0223 13:27:27.890678 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-ovn-rundir\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.891859 master-0 kubenswrapper[26474]: I0223 13:27:27.891828 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-config\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.900449 master-0 kubenswrapper[26474]: I0223 13:27:27.900393 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-combined-ca-bundle\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:27.923425 master-0 kubenswrapper[26474]: I0223 13:27:27.922986 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:28.001369 master-0 kubenswrapper[26474]: I0223 13:27:27.995261 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" Feb 23 13:27:28.308292 master-0 kubenswrapper[26474]: I0223 13:27:28.304220 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsg7j\" (UniqueName: \"kubernetes.io/projected/a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea-kube-api-access-lsg7j\") pod \"ovn-controller-metrics-k9gmf\" (UID: \"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea\") " pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:28.308292 master-0 kubenswrapper[26474]: I0223 13:27:28.306773 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-k9gmf" Feb 23 13:27:28.448950 master-0 kubenswrapper[26474]: I0223 13:27:28.448888 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 23 13:27:28.470320 master-0 kubenswrapper[26474]: I0223 13:27:28.470276 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 13:27:28.477057 master-0 kubenswrapper[26474]: I0223 13:27:28.476991 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 13:27:28.477317 master-0 kubenswrapper[26474]: I0223 13:27:28.477186 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 23 13:27:28.478044 master-0 kubenswrapper[26474]: I0223 13:27:28.477421 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 23 13:27:28.478256 master-0 kubenswrapper[26474]: I0223 13:27:28.478227 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 23 13:27:28.495500 master-0 kubenswrapper[26474]: I0223 13:27:28.495257 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-z2dw7"] Feb 23 13:27:28.517353 master-0 kubenswrapper[26474]: I0223 13:27:28.517287 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e61746e-5d88-4e65-876e-940b3299dae0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0" Feb 23 13:27:28.517539 master-0 kubenswrapper[26474]: I0223 13:27:28.517514 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.517618 master-0 kubenswrapper[26474]: I0223 13:27:28.517543 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.519407 master-0 kubenswrapper[26474]: I0223 13:27:28.519258 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh2nq\" (UniqueName: \"kubernetes.io/projected/4e61746e-5d88-4e65-876e-940b3299dae0-kube-api-access-sh2nq\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.519407 master-0 kubenswrapper[26474]: I0223 13:27:28.519375 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e61746e-5d88-4e65-876e-940b3299dae0-config\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.519407 master-0 kubenswrapper[26474]: I0223 13:27:28.519396 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.519586 master-0 kubenswrapper[26474]: I0223 13:27:28.519465 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e61746e-5d88-4e65-876e-940b3299dae0-scripts\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.535758 master-0 kubenswrapper[26474]: I0223 13:27:28.535711 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-xzggb"]
Feb 23 13:27:28.537816 master-0 kubenswrapper[26474]: I0223 13:27:28.537795 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.540184 master-0 kubenswrapper[26474]: I0223 13:27:28.540161 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 23 13:27:28.551531 master-0 kubenswrapper[26474]: I0223 13:27:28.551459 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-xzggb"]
Feb 23 13:27:28.566262 master-0 kubenswrapper[26474]: I0223 13:27:28.566125 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-z2dw7"]
Feb 23 13:27:28.621550 master-0 kubenswrapper[26474]: I0223 13:27:28.621499 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-config\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.621671 master-0 kubenswrapper[26474]: I0223 13:27:28.621571 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btxzr\" (UniqueName: \"kubernetes.io/projected/c2883846-79db-42b0-bd78-b482ed44f74d-kube-api-access-btxzr\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.621671 master-0 kubenswrapper[26474]: I0223 13:27:28.621649 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e61746e-5d88-4e65-876e-940b3299dae0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.621740 master-0 kubenswrapper[26474]: I0223 13:27:28.621707 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-dns-svc\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.621740 master-0 kubenswrapper[26474]: I0223 13:27:28.621734 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-nb\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.621798 master-0 kubenswrapper[26474]: I0223 13:27:28.621751 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.621798 master-0 kubenswrapper[26474]: I0223 13:27:28.621768 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.621798 master-0 kubenswrapper[26474]: I0223 13:27:28.621796 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh2nq\" (UniqueName: \"kubernetes.io/projected/4e61746e-5d88-4e65-876e-940b3299dae0-kube-api-access-sh2nq\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.623488 master-0 kubenswrapper[26474]: I0223 13:27:28.621827 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e61746e-5d88-4e65-876e-940b3299dae0-config\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.623488 master-0 kubenswrapper[26474]: I0223 13:27:28.621843 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-sb\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.623488 master-0 kubenswrapper[26474]: I0223 13:27:28.621862 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.623488 master-0 kubenswrapper[26474]: I0223 13:27:28.621893 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e61746e-5d88-4e65-876e-940b3299dae0-scripts\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.623488 master-0 kubenswrapper[26474]: I0223 13:27:28.622941 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e61746e-5d88-4e65-876e-940b3299dae0-scripts\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.623488 master-0 kubenswrapper[26474]: I0223 13:27:28.623412 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e61746e-5d88-4e65-876e-940b3299dae0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.624558 master-0 kubenswrapper[26474]: I0223 13:27:28.624537 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e61746e-5d88-4e65-876e-940b3299dae0-config\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.630510 master-0 kubenswrapper[26474]: I0223 13:27:28.630484 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.630901 master-0 kubenswrapper[26474]: I0223 13:27:28.630852 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.632773 master-0 kubenswrapper[26474]: I0223 13:27:28.632719 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e61746e-5d88-4e65-876e-940b3299dae0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.638976 master-0 kubenswrapper[26474]: I0223 13:27:28.638949 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh2nq\" (UniqueName: \"kubernetes.io/projected/4e61746e-5d88-4e65-876e-940b3299dae0-kube-api-access-sh2nq\") pod \"ovn-northd-0\" (UID: \"4e61746e-5d88-4e65-876e-940b3299dae0\") " pod="openstack/ovn-northd-0"
Feb 23 13:27:28.725421 master-0 kubenswrapper[26474]: I0223 13:27:28.725216 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-dns-svc\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.725421 master-0 kubenswrapper[26474]: I0223 13:27:28.724327 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-dns-svc\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.725421 master-0 kubenswrapper[26474]: I0223 13:27:28.725307 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-nb\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.725421 master-0 kubenswrapper[26474]: I0223 13:27:28.725381 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-sb\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.726257 master-0 kubenswrapper[26474]: I0223 13:27:28.726220 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-sb\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.726978 master-0 kubenswrapper[26474]: I0223 13:27:28.726890 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-nb\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.727399 master-0 kubenswrapper[26474]: I0223 13:27:28.727367 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-config\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.727480 master-0 kubenswrapper[26474]: I0223 13:27:28.725499 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-config\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.727480 master-0 kubenswrapper[26474]: I0223 13:27:28.727460 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btxzr\" (UniqueName: \"kubernetes.io/projected/c2883846-79db-42b0-bd78-b482ed44f74d-kube-api-access-btxzr\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.750428 master-0 kubenswrapper[26474]: I0223 13:27:28.749510 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btxzr\" (UniqueName: \"kubernetes.io/projected/c2883846-79db-42b0-bd78-b482ed44f74d-kube-api-access-btxzr\") pod \"dnsmasq-dns-65c6cc445f-xzggb\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") " pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:28.804949 master-0 kubenswrapper[26474]: I0223 13:27:28.804842 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 23 13:27:28.874539 master-0 kubenswrapper[26474]: I0223 13:27:28.874279 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-k9gmf"]
Feb 23 13:27:28.878097 master-0 kubenswrapper[26474]: I0223 13:27:28.878030 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:29.154405 master-0 kubenswrapper[26474]: I0223 13:27:29.135609 26474 generic.go:334] "Generic (PLEG): container finished" podID="75e18619-a50b-4493-81a7-3b8566c38586" containerID="594eb84dd1c9a96a54fda5eaab3f6727518dcaaa7471ab972b9fe972cc30fae8" exitCode=0
Feb 23 13:27:29.154405 master-0 kubenswrapper[26474]: I0223 13:27:29.135712 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" event={"ID":"75e18619-a50b-4493-81a7-3b8566c38586","Type":"ContainerDied","Data":"594eb84dd1c9a96a54fda5eaab3f6727518dcaaa7471ab972b9fe972cc30fae8"}
Feb 23 13:27:29.154405 master-0 kubenswrapper[26474]: I0223 13:27:29.135739 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" event={"ID":"75e18619-a50b-4493-81a7-3b8566c38586","Type":"ContainerStarted","Data":"7315186ebed0b51bd374a48eb146d227674f367f944df16155c9dfa6445f0af3"}
Feb 23 13:27:29.177378 master-0 kubenswrapper[26474]: I0223 13:27:29.167233 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-k9gmf" event={"ID":"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea","Type":"ContainerStarted","Data":"004ec9c5051c9fcb78677d7d4732b4071c73a10c0527f9dd596491eb0dd1bc31"}
Feb 23 13:27:29.200566 master-0 kubenswrapper[26474]: I0223 13:27:29.189670 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 23 13:27:29.342751 master-0 kubenswrapper[26474]: I0223 13:27:29.342685 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 23 13:27:29.594209 master-0 kubenswrapper[26474]: I0223 13:27:29.594141 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-xzggb"]
Feb 23 13:27:29.601892 master-0 kubenswrapper[26474]: W0223 13:27:29.601840 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2883846_79db_42b0_bd78_b482ed44f74d.slice/crio-0c5937276f7998a393106c599b4b0a3d8cdb33d0d0565fc3360a306d0c7c39c2 WatchSource:0}: Error finding container 0c5937276f7998a393106c599b4b0a3d8cdb33d0d0565fc3360a306d0c7c39c2: Status 404 returned error can't find the container with id 0c5937276f7998a393106c599b4b0a3d8cdb33d0d0565fc3360a306d0c7c39c2
Feb 23 13:27:29.693672 master-0 kubenswrapper[26474]: I0223 13:27:29.693627 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7"
Feb 23 13:27:29.854313 master-0 kubenswrapper[26474]: I0223 13:27:29.854250 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbbvv\" (UniqueName: \"kubernetes.io/projected/75e18619-a50b-4493-81a7-3b8566c38586-kube-api-access-pbbvv\") pod \"75e18619-a50b-4493-81a7-3b8566c38586\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") "
Feb 23 13:27:29.857806 master-0 kubenswrapper[26474]: I0223 13:27:29.856176 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-config\") pod \"75e18619-a50b-4493-81a7-3b8566c38586\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") "
Feb 23 13:27:29.859091 master-0 kubenswrapper[26474]: I0223 13:27:29.859011 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-ovsdbserver-nb\") pod \"75e18619-a50b-4493-81a7-3b8566c38586\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") "
Feb 23 13:27:29.859176 master-0 kubenswrapper[26474]: I0223 13:27:29.859152 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-dns-svc\") pod \"75e18619-a50b-4493-81a7-3b8566c38586\" (UID: \"75e18619-a50b-4493-81a7-3b8566c38586\") "
Feb 23 13:27:29.875760 master-0 kubenswrapper[26474]: I0223 13:27:29.875666 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e18619-a50b-4493-81a7-3b8566c38586-kube-api-access-pbbvv" (OuterVolumeSpecName: "kube-api-access-pbbvv") pod "75e18619-a50b-4493-81a7-3b8566c38586" (UID: "75e18619-a50b-4493-81a7-3b8566c38586"). InnerVolumeSpecName "kube-api-access-pbbvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:27:29.882328 master-0 kubenswrapper[26474]: I0223 13:27:29.881370 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbbvv\" (UniqueName: \"kubernetes.io/projected/75e18619-a50b-4493-81a7-3b8566c38586-kube-api-access-pbbvv\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:29.890487 master-0 kubenswrapper[26474]: I0223 13:27:29.888321 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75e18619-a50b-4493-81a7-3b8566c38586" (UID: "75e18619-a50b-4493-81a7-3b8566c38586"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:29.894388 master-0 kubenswrapper[26474]: I0223 13:27:29.894304 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75e18619-a50b-4493-81a7-3b8566c38586" (UID: "75e18619-a50b-4493-81a7-3b8566c38586"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:29.908671 master-0 kubenswrapper[26474]: I0223 13:27:29.908463 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-config" (OuterVolumeSpecName: "config") pod "75e18619-a50b-4493-81a7-3b8566c38586" (UID: "75e18619-a50b-4493-81a7-3b8566c38586"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:29.987857 master-0 kubenswrapper[26474]: I0223 13:27:29.987793 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:29.987857 master-0 kubenswrapper[26474]: I0223 13:27:29.987851 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:29.987857 master-0 kubenswrapper[26474]: I0223 13:27:29.987865 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e18619-a50b-4493-81a7-3b8566c38586-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:30.179723 master-0 kubenswrapper[26474]: I0223 13:27:30.178618 26474 generic.go:334] "Generic (PLEG): container finished" podID="c2883846-79db-42b0-bd78-b482ed44f74d" containerID="f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8" exitCode=0
Feb 23 13:27:30.179723 master-0 kubenswrapper[26474]: I0223 13:27:30.178720 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb" event={"ID":"c2883846-79db-42b0-bd78-b482ed44f74d","Type":"ContainerDied","Data":"f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8"}
Feb 23 13:27:30.179723 master-0 kubenswrapper[26474]: I0223 13:27:30.178798 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb" event={"ID":"c2883846-79db-42b0-bd78-b482ed44f74d","Type":"ContainerStarted","Data":"0c5937276f7998a393106c599b4b0a3d8cdb33d0d0565fc3360a306d0c7c39c2"}
Feb 23 13:27:30.182406 master-0 kubenswrapper[26474]: I0223 13:27:30.182362 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-k9gmf" event={"ID":"a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea","Type":"ContainerStarted","Data":"ee9af699c0a601adb84ccee91e571c5144fc46a30554b90cec3fadd18ac68adc"}
Feb 23 13:27:30.184611 master-0 kubenswrapper[26474]: I0223 13:27:30.184569 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7" event={"ID":"75e18619-a50b-4493-81a7-3b8566c38586","Type":"ContainerDied","Data":"7315186ebed0b51bd374a48eb146d227674f367f944df16155c9dfa6445f0af3"}
Feb 23 13:27:30.184689 master-0 kubenswrapper[26474]: I0223 13:27:30.184613 26474 scope.go:117] "RemoveContainer" containerID="594eb84dd1c9a96a54fda5eaab3f6727518dcaaa7471ab972b9fe972cc30fae8"
Feb 23 13:27:30.184689 master-0 kubenswrapper[26474]: I0223 13:27:30.184620 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c685c7df5-z2dw7"
Feb 23 13:27:30.186046 master-0 kubenswrapper[26474]: I0223 13:27:30.186024 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4e61746e-5d88-4e65-876e-940b3299dae0","Type":"ContainerStarted","Data":"61b1e28f59dce17ef54e3f473168cad2f0c15ffe8a4324c69a5a41d0c297d3f6"}
Feb 23 13:27:30.231741 master-0 kubenswrapper[26474]: I0223 13:27:30.231619 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-k9gmf" podStartSLOduration=3.231597732 podStartE2EDuration="3.231597732s" podCreationTimestamp="2026-02-23 13:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:30.223445644 +0000 UTC m=+772.069953351" watchObservedRunningTime="2026-02-23 13:27:30.231597732 +0000 UTC m=+772.078105409"
Feb 23 13:27:30.296044 master-0 kubenswrapper[26474]: I0223 13:27:30.295967 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-z2dw7"]
Feb 23 13:27:30.316419 master-0 kubenswrapper[26474]: I0223 13:27:30.314522 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-z2dw7"]
Feb 23 13:27:30.421752 master-0 kubenswrapper[26474]: I0223 13:27:30.421692 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e18619-a50b-4493-81a7-3b8566c38586" path="/var/lib/kubelet/pods/75e18619-a50b-4493-81a7-3b8566c38586/volumes"
Feb 23 13:27:31.158404 master-0 kubenswrapper[26474]: I0223 13:27:31.158330 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-xzggb"]
Feb 23 13:27:31.218934 master-0 kubenswrapper[26474]: I0223 13:27:31.218102 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-mfwb8"]
Feb 23 13:27:31.218934 master-0 kubenswrapper[26474]: E0223 13:27:31.218630 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e18619-a50b-4493-81a7-3b8566c38586" containerName="init"
Feb 23 13:27:31.218934 master-0 kubenswrapper[26474]: I0223 13:27:31.218645 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e18619-a50b-4493-81a7-3b8566c38586" containerName="init"
Feb 23 13:27:31.218934 master-0 kubenswrapper[26474]: I0223 13:27:31.218850 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e18619-a50b-4493-81a7-3b8566c38586" containerName="init"
Feb 23 13:27:31.221461 master-0 kubenswrapper[26474]: I0223 13:27:31.219934 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.250214 master-0 kubenswrapper[26474]: I0223 13:27:31.246502 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-mfwb8"]
Feb 23 13:27:31.251994 master-0 kubenswrapper[26474]: I0223 13:27:31.251941 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb" event={"ID":"c2883846-79db-42b0-bd78-b482ed44f74d","Type":"ContainerStarted","Data":"690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6"}
Feb 23 13:27:31.252854 master-0 kubenswrapper[26474]: I0223 13:27:31.252825 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:31.269122 master-0 kubenswrapper[26474]: I0223 13:27:31.268888 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4e61746e-5d88-4e65-876e-940b3299dae0","Type":"ContainerStarted","Data":"0f22f7e099442eeb5e2c7734e398cb9f3cda90f8d4709030f7b339e098e64fc7"}
Feb 23 13:27:31.269475 master-0 kubenswrapper[26474]: I0223 13:27:31.269410 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 23 13:27:31.269559 master-0 kubenswrapper[26474]: I0223 13:27:31.269545 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4e61746e-5d88-4e65-876e-940b3299dae0","Type":"ContainerStarted","Data":"44537d4cfcd9b5f35916ed4da96cac322815e887990823974c72d9266522e309"}
Feb 23 13:27:31.310043 master-0 kubenswrapper[26474]: I0223 13:27:31.304440 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb" podStartSLOduration=3.304412957 podStartE2EDuration="3.304412957s" podCreationTimestamp="2026-02-23 13:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:31.288123231 +0000 UTC m=+773.134630908" watchObservedRunningTime="2026-02-23 13:27:31.304412957 +0000 UTC m=+773.150920634"
Feb 23 13:27:31.319410 master-0 kubenswrapper[26474]: I0223 13:27:31.319279 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.158401395 podStartE2EDuration="3.319246528s" podCreationTimestamp="2026-02-23 13:27:28 +0000 UTC" firstStartedPulling="2026-02-23 13:27:29.379592216 +0000 UTC m=+771.226099893" lastFinishedPulling="2026-02-23 13:27:30.540437349 +0000 UTC m=+772.386945026" observedRunningTime="2026-02-23 13:27:31.309022089 +0000 UTC m=+773.155529766" watchObservedRunningTime="2026-02-23 13:27:31.319246528 +0000 UTC m=+773.165754225"
Feb 23 13:27:31.319593 master-0 kubenswrapper[26474]: I0223 13:27:31.319519 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th9f9\" (UniqueName: \"kubernetes.io/projected/018c5ca6-eb74-483c-9079-6463547e3a46-kube-api-access-th9f9\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.319631 master-0 kubenswrapper[26474]: I0223 13:27:31.319604 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.319666 master-0 kubenswrapper[26474]: I0223 13:27:31.319648 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-config\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.319740 master-0 kubenswrapper[26474]: I0223 13:27:31.319711 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.319964 master-0 kubenswrapper[26474]: I0223 13:27:31.319936 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-dns-svc\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.424108 master-0 kubenswrapper[26474]: I0223 13:27:31.423037 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-dns-svc\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.424108 master-0 kubenswrapper[26474]: I0223 13:27:31.423991 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-dns-svc\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.424321 master-0 kubenswrapper[26474]: I0223 13:27:31.424193 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th9f9\" (UniqueName: \"kubernetes.io/projected/018c5ca6-eb74-483c-9079-6463547e3a46-kube-api-access-th9f9\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.424321 master-0 kubenswrapper[26474]: I0223 13:27:31.424242 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.424321 master-0 kubenswrapper[26474]: I0223 13:27:31.424276 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-config\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.424321 master-0 kubenswrapper[26474]: I0223 13:27:31.424315 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.425580 master-0 kubenswrapper[26474]: I0223 13:27:31.425546 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.425708 master-0 kubenswrapper[26474]: I0223 13:27:31.425662 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-config\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.426258 master-0 kubenswrapper[26474]: I0223 13:27:31.426178 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.448193 master-0 kubenswrapper[26474]: I0223 13:27:31.448119 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th9f9\" (UniqueName: \"kubernetes.io/projected/018c5ca6-eb74-483c-9079-6463547e3a46-kube-api-access-th9f9\") pod \"dnsmasq-dns-5c55964f59-mfwb8\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") " pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:31.567174 master-0 kubenswrapper[26474]: I0223 13:27:31.567108 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:32.188985 master-0 kubenswrapper[26474]: I0223 13:27:32.188920 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-mfwb8"]
Feb 23 13:27:32.344898 master-0 kubenswrapper[26474]: I0223 13:27:32.344803 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" event={"ID":"018c5ca6-eb74-483c-9079-6463547e3a46","Type":"ContainerStarted","Data":"bba42275a22ff580154a84cf7eb3657f16c880ac26ea5115f1b0d361fbf24481"}
Feb 23 13:27:32.345522 master-0 kubenswrapper[26474]: I0223 13:27:32.345308 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb" podUID="c2883846-79db-42b0-bd78-b482ed44f74d" containerName="dnsmasq-dns" containerID="cri-o://690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6" gracePeriod=10
Feb 23 13:27:32.847277 master-0 kubenswrapper[26474]: I0223 13:27:32.847211 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb"
Feb 23 13:27:32.869467 master-0 kubenswrapper[26474]: I0223 13:27:32.869369 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-sb\") pod \"c2883846-79db-42b0-bd78-b482ed44f74d\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") "
Feb 23 13:27:32.869732 master-0 kubenswrapper[26474]: I0223 13:27:32.869593 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-nb\") pod \"c2883846-79db-42b0-bd78-b482ed44f74d\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") "
Feb 23 13:27:32.869732 master-0 kubenswrapper[26474]: I0223 13:27:32.869669 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btxzr\" (UniqueName: \"kubernetes.io/projected/c2883846-79db-42b0-bd78-b482ed44f74d-kube-api-access-btxzr\") pod \"c2883846-79db-42b0-bd78-b482ed44f74d\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") "
Feb 23 13:27:32.869732 master-0 kubenswrapper[26474]: I0223 13:27:32.869702 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-dns-svc\") pod \"c2883846-79db-42b0-bd78-b482ed44f74d\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") "
Feb 23 13:27:32.870003 master-0 kubenswrapper[26474]: I0223 13:27:32.869763 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-config\") pod \"c2883846-79db-42b0-bd78-b482ed44f74d\" (UID: \"c2883846-79db-42b0-bd78-b482ed44f74d\") "
Feb 23 13:27:32.887164 master-0 kubenswrapper[26474]: I0223 13:27:32.887085 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 23 13:27:32.887164 master-0 kubenswrapper[26474]: I0223 13:27:32.887150 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 23 13:27:32.904989 master-0 kubenswrapper[26474]: I0223 13:27:32.904923 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2883846-79db-42b0-bd78-b482ed44f74d-kube-api-access-btxzr" (OuterVolumeSpecName: "kube-api-access-btxzr") pod "c2883846-79db-42b0-bd78-b482ed44f74d" (UID: "c2883846-79db-42b0-bd78-b482ed44f74d"). InnerVolumeSpecName "kube-api-access-btxzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:27:32.935204 master-0 kubenswrapper[26474]: I0223 13:27:32.935141 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2883846-79db-42b0-bd78-b482ed44f74d" (UID: "c2883846-79db-42b0-bd78-b482ed44f74d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:32.943280 master-0 kubenswrapper[26474]: I0223 13:27:32.943212 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2883846-79db-42b0-bd78-b482ed44f74d" (UID: "c2883846-79db-42b0-bd78-b482ed44f74d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:32.966543 master-0 kubenswrapper[26474]: I0223 13:27:32.966097 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2883846-79db-42b0-bd78-b482ed44f74d" (UID: "c2883846-79db-42b0-bd78-b482ed44f74d").
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:32.967014 master-0 kubenswrapper[26474]: I0223 13:27:32.966964 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-config" (OuterVolumeSpecName: "config") pod "c2883846-79db-42b0-bd78-b482ed44f74d" (UID: "c2883846-79db-42b0-bd78-b482ed44f74d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:32.975369 master-0 kubenswrapper[26474]: I0223 13:27:32.975129 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:32.975369 master-0 kubenswrapper[26474]: I0223 13:27:32.975174 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btxzr\" (UniqueName: \"kubernetes.io/projected/c2883846-79db-42b0-bd78-b482ed44f74d-kube-api-access-btxzr\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:32.975369 master-0 kubenswrapper[26474]: I0223 13:27:32.975199 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:32.975369 master-0 kubenswrapper[26474]: I0223 13:27:32.975213 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:32.975369 master-0 kubenswrapper[26474]: I0223 13:27:32.975224 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2883846-79db-42b0-bd78-b482ed44f74d-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:33.247799 master-0 kubenswrapper[26474]: 
I0223 13:27:33.247737 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 23 13:27:33.248204 master-0 kubenswrapper[26474]: E0223 13:27:33.248181 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2883846-79db-42b0-bd78-b482ed44f74d" containerName="init" Feb 23 13:27:33.248204 master-0 kubenswrapper[26474]: I0223 13:27:33.248200 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2883846-79db-42b0-bd78-b482ed44f74d" containerName="init" Feb 23 13:27:33.248382 master-0 kubenswrapper[26474]: E0223 13:27:33.248237 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2883846-79db-42b0-bd78-b482ed44f74d" containerName="dnsmasq-dns" Feb 23 13:27:33.248382 master-0 kubenswrapper[26474]: I0223 13:27:33.248249 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2883846-79db-42b0-bd78-b482ed44f74d" containerName="dnsmasq-dns" Feb 23 13:27:33.248643 master-0 kubenswrapper[26474]: I0223 13:27:33.248615 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2883846-79db-42b0-bd78-b482ed44f74d" containerName="dnsmasq-dns" Feb 23 13:27:33.260268 master-0 kubenswrapper[26474]: I0223 13:27:33.260198 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 23 13:27:33.263813 master-0 kubenswrapper[26474]: I0223 13:27:33.263762 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 23 13:27:33.264022 master-0 kubenswrapper[26474]: I0223 13:27:33.263941 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 23 13:27:33.264109 master-0 kubenswrapper[26474]: I0223 13:27:33.264083 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 23 13:27:33.266113 master-0 kubenswrapper[26474]: I0223 13:27:33.266064 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 13:27:33.288910 master-0 kubenswrapper[26474]: I0223 13:27:33.288809 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.288910 master-0 kubenswrapper[26474]: I0223 13:27:33.288910 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7172c21a-db8e-428a-9a0c-5ef060abafd3-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.289215 master-0 kubenswrapper[26474]: I0223 13:27:33.288997 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7172c21a-db8e-428a-9a0c-5ef060abafd3-cache\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.289215 master-0 kubenswrapper[26474]: I0223 13:27:33.289058 
26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7172c21a-db8e-428a-9a0c-5ef060abafd3-lock\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.289314 master-0 kubenswrapper[26474]: I0223 13:27:33.289268 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjhj5\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-kube-api-access-rjhj5\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.289622 master-0 kubenswrapper[26474]: I0223 13:27:33.289592 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9082f270-ade7-4e10-b7c7-29833982be48\" (UniqueName: \"kubernetes.io/csi/topolvm.io^58083c89-9fa9-49be-8f16-aef8afc1c6e9\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.359226 master-0 kubenswrapper[26474]: I0223 13:27:33.359155 26474 generic.go:334] "Generic (PLEG): container finished" podID="c2883846-79db-42b0-bd78-b482ed44f74d" containerID="690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6" exitCode=0 Feb 23 13:27:33.359813 master-0 kubenswrapper[26474]: I0223 13:27:33.359242 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb" Feb 23 13:27:33.359813 master-0 kubenswrapper[26474]: I0223 13:27:33.359263 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb" event={"ID":"c2883846-79db-42b0-bd78-b482ed44f74d","Type":"ContainerDied","Data":"690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6"} Feb 23 13:27:33.359813 master-0 kubenswrapper[26474]: I0223 13:27:33.359307 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-xzggb" event={"ID":"c2883846-79db-42b0-bd78-b482ed44f74d","Type":"ContainerDied","Data":"0c5937276f7998a393106c599b4b0a3d8cdb33d0d0565fc3360a306d0c7c39c2"} Feb 23 13:27:33.359813 master-0 kubenswrapper[26474]: I0223 13:27:33.359332 26474 scope.go:117] "RemoveContainer" containerID="690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6" Feb 23 13:27:33.368383 master-0 kubenswrapper[26474]: I0223 13:27:33.368300 26474 generic.go:334] "Generic (PLEG): container finished" podID="018c5ca6-eb74-483c-9079-6463547e3a46" containerID="ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d" exitCode=0 Feb 23 13:27:33.368461 master-0 kubenswrapper[26474]: I0223 13:27:33.368404 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" event={"ID":"018c5ca6-eb74-483c-9079-6463547e3a46","Type":"ContainerDied","Data":"ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d"} Feb 23 13:27:33.395877 master-0 kubenswrapper[26474]: I0223 13:27:33.392970 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.395877 master-0 kubenswrapper[26474]: I0223 13:27:33.393031 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7172c21a-db8e-428a-9a0c-5ef060abafd3-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.395877 master-0 kubenswrapper[26474]: I0223 13:27:33.393081 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7172c21a-db8e-428a-9a0c-5ef060abafd3-cache\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.395877 master-0 kubenswrapper[26474]: I0223 13:27:33.393111 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7172c21a-db8e-428a-9a0c-5ef060abafd3-lock\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.395877 master-0 kubenswrapper[26474]: I0223 13:27:33.393161 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjhj5\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-kube-api-access-rjhj5\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.395877 master-0 kubenswrapper[26474]: I0223 13:27:33.393229 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9082f270-ade7-4e10-b7c7-29833982be48\" (UniqueName: \"kubernetes.io/csi/topolvm.io^58083c89-9fa9-49be-8f16-aef8afc1c6e9\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.399215 master-0 kubenswrapper[26474]: I0223 13:27:33.399168 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/7172c21a-db8e-428a-9a0c-5ef060abafd3-lock\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.399354 master-0 kubenswrapper[26474]: E0223 13:27:33.399309 26474 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:27:33.399354 master-0 kubenswrapper[26474]: E0223 13:27:33.399329 26474 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:27:33.399455 master-0 kubenswrapper[26474]: E0223 13:27:33.399410 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift podName:7172c21a-db8e-428a-9a0c-5ef060abafd3 nodeName:}" failed. No retries permitted until 2026-02-23 13:27:33.89938827 +0000 UTC m=+775.745895947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift") pod "swift-storage-0" (UID: "7172c21a-db8e-428a-9a0c-5ef060abafd3") : configmap "swift-ring-files" not found Feb 23 13:27:33.400785 master-0 kubenswrapper[26474]: I0223 13:27:33.399647 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7172c21a-db8e-428a-9a0c-5ef060abafd3-cache\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.402457 master-0 kubenswrapper[26474]: I0223 13:27:33.402429 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 23 13:27:33.402537 master-0 kubenswrapper[26474]: I0223 13:27:33.402461 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9082f270-ade7-4e10-b7c7-29833982be48\" (UniqueName: \"kubernetes.io/csi/topolvm.io^58083c89-9fa9-49be-8f16-aef8afc1c6e9\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f2a65cf23c2b5d30696024b12870c3f636f849fca78895e553f21f1951dc4997/globalmount\"" pod="openstack/swift-storage-0" Feb 23 13:27:33.411228 master-0 kubenswrapper[26474]: I0223 13:27:33.411078 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7172c21a-db8e-428a-9a0c-5ef060abafd3-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.425229 master-0 kubenswrapper[26474]: I0223 13:27:33.424947 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjhj5\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-kube-api-access-rjhj5\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.425516 master-0 kubenswrapper[26474]: I0223 13:27:33.425460 26474 scope.go:117] "RemoveContainer" containerID="f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8" Feb 23 13:27:33.431190 master-0 kubenswrapper[26474]: I0223 13:27:33.431119 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-xzggb"] Feb 23 13:27:33.444136 master-0 kubenswrapper[26474]: I0223 13:27:33.444071 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-xzggb"] Feb 23 13:27:33.560774 master-0 kubenswrapper[26474]: I0223 13:27:33.558582 26474 scope.go:117] "RemoveContainer" 
containerID="690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6" Feb 23 13:27:33.563965 master-0 kubenswrapper[26474]: E0223 13:27:33.563776 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6\": container with ID starting with 690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6 not found: ID does not exist" containerID="690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6" Feb 23 13:27:33.563965 master-0 kubenswrapper[26474]: I0223 13:27:33.563836 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6"} err="failed to get container status \"690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6\": rpc error: code = NotFound desc = could not find container \"690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6\": container with ID starting with 690ef4a35d75669429535b01f245c879ab25e73adbe8ddfe1c502161ab9b7ce6 not found: ID does not exist" Feb 23 13:27:33.563965 master-0 kubenswrapper[26474]: I0223 13:27:33.563872 26474 scope.go:117] "RemoveContainer" containerID="f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8" Feb 23 13:27:33.574062 master-0 kubenswrapper[26474]: E0223 13:27:33.573918 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8\": container with ID starting with f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8 not found: ID does not exist" containerID="f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8" Feb 23 13:27:33.574062 master-0 kubenswrapper[26474]: I0223 13:27:33.574009 26474 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8"} err="failed to get container status \"f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8\": rpc error: code = NotFound desc = could not find container \"f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8\": container with ID starting with f6483334d80434ac86ed816bffa9a94e89ce8bd6c114273ef3731dc8015120e8 not found: ID does not exist" Feb 23 13:27:33.923871 master-0 kubenswrapper[26474]: I0223 13:27:33.923802 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:33.924100 master-0 kubenswrapper[26474]: E0223 13:27:33.924066 26474 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:27:33.924100 master-0 kubenswrapper[26474]: E0223 13:27:33.924082 26474 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:27:33.924186 master-0 kubenswrapper[26474]: E0223 13:27:33.924125 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift podName:7172c21a-db8e-428a-9a0c-5ef060abafd3 nodeName:}" failed. No retries permitted until 2026-02-23 13:27:34.9241093 +0000 UTC m=+776.770616967 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift") pod "swift-storage-0" (UID: "7172c21a-db8e-428a-9a0c-5ef060abafd3") : configmap "swift-ring-files" not found Feb 23 13:27:34.374505 master-0 kubenswrapper[26474]: I0223 13:27:34.374405 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 23 13:27:34.375635 master-0 kubenswrapper[26474]: I0223 13:27:34.375602 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 23 13:27:34.424415 master-0 kubenswrapper[26474]: I0223 13:27:34.424325 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2883846-79db-42b0-bd78-b482ed44f74d" path="/var/lib/kubelet/pods/c2883846-79db-42b0-bd78-b482ed44f74d/volumes" Feb 23 13:27:34.426018 master-0 kubenswrapper[26474]: I0223 13:27:34.425938 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" event={"ID":"018c5ca6-eb74-483c-9079-6463547e3a46","Type":"ContainerStarted","Data":"1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e"} Feb 23 13:27:34.426306 master-0 kubenswrapper[26474]: I0223 13:27:34.426259 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" Feb 23 13:27:34.437653 master-0 kubenswrapper[26474]: I0223 13:27:34.437286 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" podStartSLOduration=3.437260618 podStartE2EDuration="3.437260618s" podCreationTimestamp="2026-02-23 13:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:34.418973564 +0000 UTC m=+776.265481261" watchObservedRunningTime="2026-02-23 13:27:34.437260618 +0000 UTC m=+776.283768335" 
Feb 23 13:27:34.480286 master-0 kubenswrapper[26474]: I0223 13:27:34.480119 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 23 13:27:34.892372 master-0 kubenswrapper[26474]: I0223 13:27:34.892256 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9082f270-ade7-4e10-b7c7-29833982be48\" (UniqueName: \"kubernetes.io/csi/topolvm.io^58083c89-9fa9-49be-8f16-aef8afc1c6e9\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:34.957714 master-0 kubenswrapper[26474]: I0223 13:27:34.957491 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:34.958027 master-0 kubenswrapper[26474]: E0223 13:27:34.957861 26474 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:27:34.958027 master-0 kubenswrapper[26474]: E0223 13:27:34.957882 26474 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:27:34.958027 master-0 kubenswrapper[26474]: E0223 13:27:34.957941 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift podName:7172c21a-db8e-428a-9a0c-5ef060abafd3 nodeName:}" failed. No retries permitted until 2026-02-23 13:27:36.957920539 +0000 UTC m=+778.804428226 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift") pod "swift-storage-0" (UID: "7172c21a-db8e-428a-9a0c-5ef060abafd3") : configmap "swift-ring-files" not found Feb 23 13:27:35.194886 master-0 kubenswrapper[26474]: I0223 13:27:35.194752 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 23 13:27:35.274915 master-0 kubenswrapper[26474]: I0223 13:27:35.274848 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 23 13:27:35.480419 master-0 kubenswrapper[26474]: I0223 13:27:35.480222 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 23 13:27:36.188203 master-0 kubenswrapper[26474]: I0223 13:27:36.188095 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ffkfh"] Feb 23 13:27:36.190268 master-0 kubenswrapper[26474]: I0223 13:27:36.190246 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ffkfh" Feb 23 13:27:36.193443 master-0 kubenswrapper[26474]: I0223 13:27:36.193419 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 23 13:27:36.209180 master-0 kubenswrapper[26474]: I0223 13:27:36.209082 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ffkfh"] Feb 23 13:27:36.286437 master-0 kubenswrapper[26474]: I0223 13:27:36.285614 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smrt6\" (UniqueName: \"kubernetes.io/projected/a664e235-45e6-4261-9aa8-524ebafc8fb7-kube-api-access-smrt6\") pod \"root-account-create-update-ffkfh\" (UID: \"a664e235-45e6-4261-9aa8-524ebafc8fb7\") " pod="openstack/root-account-create-update-ffkfh" Feb 23 13:27:36.286437 master-0 kubenswrapper[26474]: I0223 13:27:36.285812 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a664e235-45e6-4261-9aa8-524ebafc8fb7-operator-scripts\") pod \"root-account-create-update-ffkfh\" (UID: \"a664e235-45e6-4261-9aa8-524ebafc8fb7\") " pod="openstack/root-account-create-update-ffkfh" Feb 23 13:27:36.354257 master-0 kubenswrapper[26474]: I0223 13:27:36.352558 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8zvqt"] Feb 23 13:27:36.354257 master-0 kubenswrapper[26474]: I0223 13:27:36.354172 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.359650 master-0 kubenswrapper[26474]: I0223 13:27:36.356847 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 23 13:27:36.359650 master-0 kubenswrapper[26474]: I0223 13:27:36.357036 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 23 13:27:36.359650 master-0 kubenswrapper[26474]: I0223 13:27:36.356870 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 23 13:27:36.367184 master-0 kubenswrapper[26474]: I0223 13:27:36.366430 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8zvqt"]
Feb 23 13:27:36.387257 master-0 kubenswrapper[26474]: I0223 13:27:36.387198 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a664e235-45e6-4261-9aa8-524ebafc8fb7-operator-scripts\") pod \"root-account-create-update-ffkfh\" (UID: \"a664e235-45e6-4261-9aa8-524ebafc8fb7\") " pod="openstack/root-account-create-update-ffkfh"
Feb 23 13:27:36.387509 master-0 kubenswrapper[26474]: I0223 13:27:36.387298 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smrt6\" (UniqueName: \"kubernetes.io/projected/a664e235-45e6-4261-9aa8-524ebafc8fb7-kube-api-access-smrt6\") pod \"root-account-create-update-ffkfh\" (UID: \"a664e235-45e6-4261-9aa8-524ebafc8fb7\") " pod="openstack/root-account-create-update-ffkfh"
Feb 23 13:27:36.388070 master-0 kubenswrapper[26474]: I0223 13:27:36.388042 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a664e235-45e6-4261-9aa8-524ebafc8fb7-operator-scripts\") pod \"root-account-create-update-ffkfh\" (UID: \"a664e235-45e6-4261-9aa8-524ebafc8fb7\") " pod="openstack/root-account-create-update-ffkfh"
Feb 23 13:27:36.410309 master-0 kubenswrapper[26474]: I0223 13:27:36.410243 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smrt6\" (UniqueName: \"kubernetes.io/projected/a664e235-45e6-4261-9aa8-524ebafc8fb7-kube-api-access-smrt6\") pod \"root-account-create-update-ffkfh\" (UID: \"a664e235-45e6-4261-9aa8-524ebafc8fb7\") " pod="openstack/root-account-create-update-ffkfh"
Feb 23 13:27:36.491689 master-0 kubenswrapper[26474]: I0223 13:27:36.489523 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-scripts\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.492498 master-0 kubenswrapper[26474]: I0223 13:27:36.491666 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-ring-data-devices\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.492498 master-0 kubenswrapper[26474]: I0223 13:27:36.491739 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f760f819-5bdd-4b3b-9374-7bde76377f34-etc-swift\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.492498 master-0 kubenswrapper[26474]: I0223 13:27:36.491772 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr4hd\" (UniqueName: \"kubernetes.io/projected/f760f819-5bdd-4b3b-9374-7bde76377f34-kube-api-access-fr4hd\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.492498 master-0 kubenswrapper[26474]: I0223 13:27:36.492089 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-dispersionconf\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.492498 master-0 kubenswrapper[26474]: I0223 13:27:36.492305 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-combined-ca-bundle\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.492498 master-0 kubenswrapper[26474]: I0223 13:27:36.492439 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-swiftconf\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.539910 master-0 kubenswrapper[26474]: I0223 13:27:36.539782 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ffkfh"
Feb 23 13:27:36.594213 master-0 kubenswrapper[26474]: I0223 13:27:36.594142 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-dispersionconf\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.594951 master-0 kubenswrapper[26474]: I0223 13:27:36.594875 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-combined-ca-bundle\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.595082 master-0 kubenswrapper[26474]: I0223 13:27:36.595062 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-swiftconf\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.595244 master-0 kubenswrapper[26474]: I0223 13:27:36.595221 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-scripts\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.595292 master-0 kubenswrapper[26474]: I0223 13:27:36.595256 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-ring-data-devices\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.595292 master-0 kubenswrapper[26474]: I0223 13:27:36.595280 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f760f819-5bdd-4b3b-9374-7bde76377f34-etc-swift\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.595389 master-0 kubenswrapper[26474]: I0223 13:27:36.595305 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr4hd\" (UniqueName: \"kubernetes.io/projected/f760f819-5bdd-4b3b-9374-7bde76377f34-kube-api-access-fr4hd\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.596978 master-0 kubenswrapper[26474]: I0223 13:27:36.596934 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f760f819-5bdd-4b3b-9374-7bde76377f34-etc-swift\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.597121 master-0 kubenswrapper[26474]: I0223 13:27:36.597091 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-scripts\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.597227 master-0 kubenswrapper[26474]: I0223 13:27:36.597195 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-dispersionconf\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.597592 master-0 kubenswrapper[26474]: I0223 13:27:36.597561 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-ring-data-devices\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.600127 master-0 kubenswrapper[26474]: I0223 13:27:36.600077 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-swiftconf\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.614678 master-0 kubenswrapper[26474]: I0223 13:27:36.614624 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-combined-ca-bundle\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.620195 master-0 kubenswrapper[26474]: I0223 13:27:36.620145 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr4hd\" (UniqueName: \"kubernetes.io/projected/f760f819-5bdd-4b3b-9374-7bde76377f34-kube-api-access-fr4hd\") pod \"swift-ring-rebalance-8zvqt\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:36.688322 master-0 kubenswrapper[26474]: I0223 13:27:36.688248 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8zvqt"
Feb 23 13:27:37.018379 master-0 kubenswrapper[26474]: I0223 13:27:37.013177 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0"
Feb 23 13:27:37.018379 master-0 kubenswrapper[26474]: E0223 13:27:37.013443 26474 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 23 13:27:37.018379 master-0 kubenswrapper[26474]: E0223 13:27:37.013511 26474 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 23 13:27:37.018379 master-0 kubenswrapper[26474]: E0223 13:27:37.013581 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift podName:7172c21a-db8e-428a-9a0c-5ef060abafd3 nodeName:}" failed. No retries permitted until 2026-02-23 13:27:41.013557017 +0000 UTC m=+782.860064694 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift") pod "swift-storage-0" (UID: "7172c21a-db8e-428a-9a0c-5ef060abafd3") : configmap "swift-ring-files" not found
Feb 23 13:27:37.022080 master-0 kubenswrapper[26474]: I0223 13:27:37.022020 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ffkfh"]
Feb 23 13:27:37.185842 master-0 kubenswrapper[26474]: I0223 13:27:37.185660 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8zvqt"]
Feb 23 13:27:37.192289 master-0 kubenswrapper[26474]: W0223 13:27:37.192102 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf760f819_5bdd_4b3b_9374_7bde76377f34.slice/crio-98055475ad0c97a5dcf1312001f9964241bea8ba9b10cead3e358b0da165a427 WatchSource:0}: Error finding container 98055475ad0c97a5dcf1312001f9964241bea8ba9b10cead3e358b0da165a427: Status 404 returned error can't find the container with id 98055475ad0c97a5dcf1312001f9964241bea8ba9b10cead3e358b0da165a427
Feb 23 13:27:37.424929 master-0 kubenswrapper[26474]: I0223 13:27:37.424815 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8zvqt" event={"ID":"f760f819-5bdd-4b3b-9374-7bde76377f34","Type":"ContainerStarted","Data":"98055475ad0c97a5dcf1312001f9964241bea8ba9b10cead3e358b0da165a427"}
Feb 23 13:27:37.426566 master-0 kubenswrapper[26474]: I0223 13:27:37.426504 26474 generic.go:334] "Generic (PLEG): container finished" podID="a664e235-45e6-4261-9aa8-524ebafc8fb7" containerID="7e1b1b499cb3fd72bab3d32e99cae3503c24c12b065d2a5acb74cfdad1bbc1c4" exitCode=0
Feb 23 13:27:37.426672 master-0 kubenswrapper[26474]: I0223 13:27:37.426562 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ffkfh" event={"ID":"a664e235-45e6-4261-9aa8-524ebafc8fb7","Type":"ContainerDied","Data":"7e1b1b499cb3fd72bab3d32e99cae3503c24c12b065d2a5acb74cfdad1bbc1c4"}
Feb 23 13:27:37.426672 master-0 kubenswrapper[26474]: I0223 13:27:37.426612 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ffkfh" event={"ID":"a664e235-45e6-4261-9aa8-524ebafc8fb7","Type":"ContainerStarted","Data":"05ecb0de8c575141fe4cbb764d2ef56d272177ddd5390d5ab730e9fe38e59cca"}
Feb 23 13:27:39.427736 master-0 kubenswrapper[26474]: I0223 13:27:39.427648 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-6wgzv"]
Feb 23 13:27:39.429830 master-0 kubenswrapper[26474]: I0223 13:27:39.429803 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6wgzv"
Feb 23 13:27:39.475365 master-0 kubenswrapper[26474]: I0223 13:27:39.474082 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6wgzv"]
Feb 23 13:27:39.531361 master-0 kubenswrapper[26474]: I0223 13:27:39.530712 26474 trace.go:236] Trace[1501212457]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (23-Feb-2026 13:27:38.376) (total time: 1154ms):
Feb 23 13:27:39.531361 master-0 kubenswrapper[26474]: Trace[1501212457]: [1.154589071s] [1.154589071s] END
Feb 23 13:27:39.547814 master-0 kubenswrapper[26474]: I0223 13:27:39.546957 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1b01-account-create-update-57vw2"]
Feb 23 13:27:39.552372 master-0 kubenswrapper[26474]: I0223 13:27:39.548265 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:39.552372 master-0 kubenswrapper[26474]: I0223 13:27:39.550800 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 23 13:27:39.572498 master-0 kubenswrapper[26474]: I0223 13:27:39.572430 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1b01-account-create-update-57vw2"]
Feb 23 13:27:39.582517 master-0 kubenswrapper[26474]: I0223 13:27:39.582441 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrhz\" (UniqueName: \"kubernetes.io/projected/bec78070-f90d-43c3-b5dc-2b05b570d739-kube-api-access-nsrhz\") pod \"glance-db-create-6wgzv\" (UID: \"bec78070-f90d-43c3-b5dc-2b05b570d739\") " pod="openstack/glance-db-create-6wgzv"
Feb 23 13:27:39.582758 master-0 kubenswrapper[26474]: I0223 13:27:39.582553 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec78070-f90d-43c3-b5dc-2b05b570d739-operator-scripts\") pod \"glance-db-create-6wgzv\" (UID: \"bec78070-f90d-43c3-b5dc-2b05b570d739\") " pod="openstack/glance-db-create-6wgzv"
Feb 23 13:27:39.687623 master-0 kubenswrapper[26474]: I0223 13:27:39.687477 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsrhz\" (UniqueName: \"kubernetes.io/projected/bec78070-f90d-43c3-b5dc-2b05b570d739-kube-api-access-nsrhz\") pod \"glance-db-create-6wgzv\" (UID: \"bec78070-f90d-43c3-b5dc-2b05b570d739\") " pod="openstack/glance-db-create-6wgzv"
Feb 23 13:27:39.687623 master-0 kubenswrapper[26474]: I0223 13:27:39.687572 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec78070-f90d-43c3-b5dc-2b05b570d739-operator-scripts\") pod \"glance-db-create-6wgzv\" (UID: \"bec78070-f90d-43c3-b5dc-2b05b570d739\") " pod="openstack/glance-db-create-6wgzv"
Feb 23 13:27:39.687860 master-0 kubenswrapper[26474]: I0223 13:27:39.687672 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53a010b6-e0ac-42be-aa66-80acd726f647-operator-scripts\") pod \"glance-1b01-account-create-update-57vw2\" (UID: \"53a010b6-e0ac-42be-aa66-80acd726f647\") " pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:39.687860 master-0 kubenswrapper[26474]: I0223 13:27:39.687743 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw999\" (UniqueName: \"kubernetes.io/projected/53a010b6-e0ac-42be-aa66-80acd726f647-kube-api-access-tw999\") pod \"glance-1b01-account-create-update-57vw2\" (UID: \"53a010b6-e0ac-42be-aa66-80acd726f647\") " pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:39.688823 master-0 kubenswrapper[26474]: I0223 13:27:39.688781 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec78070-f90d-43c3-b5dc-2b05b570d739-operator-scripts\") pod \"glance-db-create-6wgzv\" (UID: \"bec78070-f90d-43c3-b5dc-2b05b570d739\") " pod="openstack/glance-db-create-6wgzv"
Feb 23 13:27:39.704038 master-0 kubenswrapper[26474]: I0223 13:27:39.703993 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsrhz\" (UniqueName: \"kubernetes.io/projected/bec78070-f90d-43c3-b5dc-2b05b570d739-kube-api-access-nsrhz\") pod \"glance-db-create-6wgzv\" (UID: \"bec78070-f90d-43c3-b5dc-2b05b570d739\") " pod="openstack/glance-db-create-6wgzv"
Feb 23 13:27:39.717706 master-0 kubenswrapper[26474]: I0223 13:27:39.717668 26474 trace.go:236] Trace[2045831771]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (23-Feb-2026 13:27:38.386) (total time: 1330ms):
Feb 23 13:27:39.717706 master-0 kubenswrapper[26474]: Trace[2045831771]: [1.33074776s] [1.33074776s] END
Feb 23 13:27:39.792015 master-0 kubenswrapper[26474]: I0223 13:27:39.791085 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53a010b6-e0ac-42be-aa66-80acd726f647-operator-scripts\") pod \"glance-1b01-account-create-update-57vw2\" (UID: \"53a010b6-e0ac-42be-aa66-80acd726f647\") " pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:39.792015 master-0 kubenswrapper[26474]: I0223 13:27:39.791269 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw999\" (UniqueName: \"kubernetes.io/projected/53a010b6-e0ac-42be-aa66-80acd726f647-kube-api-access-tw999\") pod \"glance-1b01-account-create-update-57vw2\" (UID: \"53a010b6-e0ac-42be-aa66-80acd726f647\") " pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:39.792015 master-0 kubenswrapper[26474]: I0223 13:27:39.791942 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53a010b6-e0ac-42be-aa66-80acd726f647-operator-scripts\") pod \"glance-1b01-account-create-update-57vw2\" (UID: \"53a010b6-e0ac-42be-aa66-80acd726f647\") " pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:39.809476 master-0 kubenswrapper[26474]: I0223 13:27:39.808407 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw999\" (UniqueName: \"kubernetes.io/projected/53a010b6-e0ac-42be-aa66-80acd726f647-kube-api-access-tw999\") pod \"glance-1b01-account-create-update-57vw2\" (UID: \"53a010b6-e0ac-42be-aa66-80acd726f647\") " pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:39.848719 master-0 kubenswrapper[26474]: I0223 13:27:39.848162 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6wgzv"
Feb 23 13:27:39.946027 master-0 kubenswrapper[26474]: I0223 13:27:39.945883 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:40.116768 master-0 kubenswrapper[26474]: I0223 13:27:40.116682 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nbjdj"]
Feb 23 13:27:40.122262 master-0 kubenswrapper[26474]: I0223 13:27:40.122198 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nbjdj"
Feb 23 13:27:40.150631 master-0 kubenswrapper[26474]: I0223 13:27:40.150559 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nbjdj"]
Feb 23 13:27:40.203387 master-0 kubenswrapper[26474]: I0223 13:27:40.203208 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rscc2\" (UniqueName: \"kubernetes.io/projected/1e097639-7f3b-414c-bbdc-a41202715f31-kube-api-access-rscc2\") pod \"keystone-db-create-nbjdj\" (UID: \"1e097639-7f3b-414c-bbdc-a41202715f31\") " pod="openstack/keystone-db-create-nbjdj"
Feb 23 13:27:40.203635 master-0 kubenswrapper[26474]: I0223 13:27:40.203417 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e097639-7f3b-414c-bbdc-a41202715f31-operator-scripts\") pod \"keystone-db-create-nbjdj\" (UID: \"1e097639-7f3b-414c-bbdc-a41202715f31\") " pod="openstack/keystone-db-create-nbjdj"
Feb 23 13:27:40.211178 master-0 kubenswrapper[26474]: I0223 13:27:40.211125 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2a9d-account-create-update-2kcdv"]
Feb 23 13:27:40.214403 master-0 kubenswrapper[26474]: I0223 13:27:40.213096 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:40.216245 master-0 kubenswrapper[26474]: I0223 13:27:40.216150 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 23 13:27:40.225044 master-0 kubenswrapper[26474]: I0223 13:27:40.224987 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2a9d-account-create-update-2kcdv"]
Feb 23 13:27:40.310427 master-0 kubenswrapper[26474]: I0223 13:27:40.305758 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e097639-7f3b-414c-bbdc-a41202715f31-operator-scripts\") pod \"keystone-db-create-nbjdj\" (UID: \"1e097639-7f3b-414c-bbdc-a41202715f31\") " pod="openstack/keystone-db-create-nbjdj"
Feb 23 13:27:40.310427 master-0 kubenswrapper[26474]: I0223 13:27:40.305863 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np9v2\" (UniqueName: \"kubernetes.io/projected/e049806d-aa21-4102-8583-a142a2f80c58-kube-api-access-np9v2\") pod \"keystone-2a9d-account-create-update-2kcdv\" (UID: \"e049806d-aa21-4102-8583-a142a2f80c58\") " pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:40.310427 master-0 kubenswrapper[26474]: I0223 13:27:40.305980 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rscc2\" (UniqueName: \"kubernetes.io/projected/1e097639-7f3b-414c-bbdc-a41202715f31-kube-api-access-rscc2\") pod \"keystone-db-create-nbjdj\" (UID: \"1e097639-7f3b-414c-bbdc-a41202715f31\") " pod="openstack/keystone-db-create-nbjdj"
Feb 23 13:27:40.310427 master-0 kubenswrapper[26474]: I0223 13:27:40.306018 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e049806d-aa21-4102-8583-a142a2f80c58-operator-scripts\") pod \"keystone-2a9d-account-create-update-2kcdv\" (UID: \"e049806d-aa21-4102-8583-a142a2f80c58\") " pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:40.310427 master-0 kubenswrapper[26474]: I0223 13:27:40.306748 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e097639-7f3b-414c-bbdc-a41202715f31-operator-scripts\") pod \"keystone-db-create-nbjdj\" (UID: \"1e097639-7f3b-414c-bbdc-a41202715f31\") " pod="openstack/keystone-db-create-nbjdj"
Feb 23 13:27:40.408005 master-0 kubenswrapper[26474]: I0223 13:27:40.407920 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np9v2\" (UniqueName: \"kubernetes.io/projected/e049806d-aa21-4102-8583-a142a2f80c58-kube-api-access-np9v2\") pod \"keystone-2a9d-account-create-update-2kcdv\" (UID: \"e049806d-aa21-4102-8583-a142a2f80c58\") " pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:40.408516 master-0 kubenswrapper[26474]: I0223 13:27:40.408033 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e049806d-aa21-4102-8583-a142a2f80c58-operator-scripts\") pod \"keystone-2a9d-account-create-update-2kcdv\" (UID: \"e049806d-aa21-4102-8583-a142a2f80c58\") " pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:40.408933 master-0 kubenswrapper[26474]: I0223 13:27:40.408902 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e049806d-aa21-4102-8583-a142a2f80c58-operator-scripts\") pod \"keystone-2a9d-account-create-update-2kcdv\" (UID: \"e049806d-aa21-4102-8583-a142a2f80c58\") " pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:40.480076 master-0 kubenswrapper[26474]: I0223 13:27:40.479923 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np9v2\" (UniqueName: \"kubernetes.io/projected/e049806d-aa21-4102-8583-a142a2f80c58-kube-api-access-np9v2\") pod \"keystone-2a9d-account-create-update-2kcdv\" (UID: \"e049806d-aa21-4102-8583-a142a2f80c58\") " pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:40.480977 master-0 kubenswrapper[26474]: I0223 13:27:40.480668 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rscc2\" (UniqueName: \"kubernetes.io/projected/1e097639-7f3b-414c-bbdc-a41202715f31-kube-api-access-rscc2\") pod \"keystone-db-create-nbjdj\" (UID: \"1e097639-7f3b-414c-bbdc-a41202715f31\") " pod="openstack/keystone-db-create-nbjdj"
Feb 23 13:27:40.537650 master-0 kubenswrapper[26474]: I0223 13:27:40.537560 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:40.747469 master-0 kubenswrapper[26474]: I0223 13:27:40.747350 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nbjdj"
Feb 23 13:27:40.753729 master-0 kubenswrapper[26474]: I0223 13:27:40.753614 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9kw5h"]
Feb 23 13:27:40.755503 master-0 kubenswrapper[26474]: I0223 13:27:40.755454 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9kw5h"
Feb 23 13:27:40.816619 master-0 kubenswrapper[26474]: I0223 13:27:40.816567 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqj4s\" (UniqueName: \"kubernetes.io/projected/0eea86bf-8169-406d-bd43-b83c6d53f41f-kube-api-access-cqj4s\") pod \"placement-db-create-9kw5h\" (UID: \"0eea86bf-8169-406d-bd43-b83c6d53f41f\") " pod="openstack/placement-db-create-9kw5h"
Feb 23 13:27:40.816739 master-0 kubenswrapper[26474]: I0223 13:27:40.816721 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eea86bf-8169-406d-bd43-b83c6d53f41f-operator-scripts\") pod \"placement-db-create-9kw5h\" (UID: \"0eea86bf-8169-406d-bd43-b83c6d53f41f\") " pod="openstack/placement-db-create-9kw5h"
Feb 23 13:27:40.924204 master-0 kubenswrapper[26474]: I0223 13:27:40.924128 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqj4s\" (UniqueName: \"kubernetes.io/projected/0eea86bf-8169-406d-bd43-b83c6d53f41f-kube-api-access-cqj4s\") pod \"placement-db-create-9kw5h\" (UID: \"0eea86bf-8169-406d-bd43-b83c6d53f41f\") " pod="openstack/placement-db-create-9kw5h"
Feb 23 13:27:40.924486 master-0 kubenswrapper[26474]: I0223 13:27:40.924323 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eea86bf-8169-406d-bd43-b83c6d53f41f-operator-scripts\") pod \"placement-db-create-9kw5h\" (UID: \"0eea86bf-8169-406d-bd43-b83c6d53f41f\") " pod="openstack/placement-db-create-9kw5h"
Feb 23 13:27:40.927237 master-0 kubenswrapper[26474]: I0223 13:27:40.927181 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eea86bf-8169-406d-bd43-b83c6d53f41f-operator-scripts\") pod \"placement-db-create-9kw5h\" (UID: \"0eea86bf-8169-406d-bd43-b83c6d53f41f\") " pod="openstack/placement-db-create-9kw5h"
Feb 23 13:27:40.931449 master-0 kubenswrapper[26474]: I0223 13:27:40.931378 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ffkfh"
Feb 23 13:27:40.939110 master-0 kubenswrapper[26474]: I0223 13:27:40.939053 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9kw5h"]
Feb 23 13:27:40.963108 master-0 kubenswrapper[26474]: I0223 13:27:40.949014 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ea4f-account-create-update-rdkts"]
Feb 23 13:27:40.963108 master-0 kubenswrapper[26474]: E0223 13:27:40.949609 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a664e235-45e6-4261-9aa8-524ebafc8fb7" containerName="mariadb-account-create-update"
Feb 23 13:27:40.963108 master-0 kubenswrapper[26474]: I0223 13:27:40.949624 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="a664e235-45e6-4261-9aa8-524ebafc8fb7" containerName="mariadb-account-create-update"
Feb 23 13:27:40.963108 master-0 kubenswrapper[26474]: I0223 13:27:40.949880 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="a664e235-45e6-4261-9aa8-524ebafc8fb7" containerName="mariadb-account-create-update"
Feb 23 13:27:40.963108 master-0 kubenswrapper[26474]: I0223 13:27:40.950916 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ea4f-account-create-update-rdkts"
Feb 23 13:27:40.963108 master-0 kubenswrapper[26474]: I0223 13:27:40.953043 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 23 13:27:40.980392 master-0 kubenswrapper[26474]: I0223 13:27:40.980314 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ea4f-account-create-update-rdkts"]
Feb 23 13:27:41.020690 master-0 kubenswrapper[26474]: I0223 13:27:41.010835 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqj4s\" (UniqueName: \"kubernetes.io/projected/0eea86bf-8169-406d-bd43-b83c6d53f41f-kube-api-access-cqj4s\") pod \"placement-db-create-9kw5h\" (UID: \"0eea86bf-8169-406d-bd43-b83c6d53f41f\") " pod="openstack/placement-db-create-9kw5h"
Feb 23 13:27:41.029359 master-0 kubenswrapper[26474]: I0223 13:27:41.026779 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smrt6\" (UniqueName: \"kubernetes.io/projected/a664e235-45e6-4261-9aa8-524ebafc8fb7-kube-api-access-smrt6\") pod \"a664e235-45e6-4261-9aa8-524ebafc8fb7\" (UID: \"a664e235-45e6-4261-9aa8-524ebafc8fb7\") "
Feb 23 13:27:41.029359 master-0 kubenswrapper[26474]: I0223 13:27:41.027138 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a664e235-45e6-4261-9aa8-524ebafc8fb7-operator-scripts\") pod \"a664e235-45e6-4261-9aa8-524ebafc8fb7\" (UID: \"a664e235-45e6-4261-9aa8-524ebafc8fb7\") "
Feb 23 13:27:41.029359 master-0 kubenswrapper[26474]: I0223 13:27:41.027706 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603694e6-de5e-4098-8271-af0f6ae0f5a5-operator-scripts\") pod \"placement-ea4f-account-create-update-rdkts\" (UID: \"603694e6-de5e-4098-8271-af0f6ae0f5a5\") " pod="openstack/placement-ea4f-account-create-update-rdkts"
Feb 23 13:27:41.029359 master-0 kubenswrapper[26474]: I0223 13:27:41.027833 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0"
Feb 23 13:27:41.029359 master-0 kubenswrapper[26474]: I0223 13:27:41.027866 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsmnb\" (UniqueName: \"kubernetes.io/projected/603694e6-de5e-4098-8271-af0f6ae0f5a5-kube-api-access-wsmnb\") pod \"placement-ea4f-account-create-update-rdkts\" (UID: \"603694e6-de5e-4098-8271-af0f6ae0f5a5\") " pod="openstack/placement-ea4f-account-create-update-rdkts"
Feb 23 13:27:41.031033 master-0 kubenswrapper[26474]: I0223 13:27:41.030646 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a664e235-45e6-4261-9aa8-524ebafc8fb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a664e235-45e6-4261-9aa8-524ebafc8fb7" (UID: "a664e235-45e6-4261-9aa8-524ebafc8fb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:41.031033 master-0 kubenswrapper[26474]: E0223 13:27:41.030795 26474 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 23 13:27:41.031033 master-0 kubenswrapper[26474]: E0223 13:27:41.030808 26474 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 23 13:27:41.031033 master-0 kubenswrapper[26474]: E0223 13:27:41.030855 26474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift podName:7172c21a-db8e-428a-9a0c-5ef060abafd3 nodeName:}" failed. No retries permitted until 2026-02-23 13:27:49.030835521 +0000 UTC m=+790.877343188 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift") pod "swift-storage-0" (UID: "7172c21a-db8e-428a-9a0c-5ef060abafd3") : configmap "swift-ring-files" not found
Feb 23 13:27:41.032416 master-0 kubenswrapper[26474]: I0223 13:27:41.032386 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a664e235-45e6-4261-9aa8-524ebafc8fb7-kube-api-access-smrt6" (OuterVolumeSpecName: "kube-api-access-smrt6") pod "a664e235-45e6-4261-9aa8-524ebafc8fb7" (UID: "a664e235-45e6-4261-9aa8-524ebafc8fb7"). InnerVolumeSpecName "kube-api-access-smrt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:27:41.130084 master-0 kubenswrapper[26474]: I0223 13:27:41.129959 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603694e6-de5e-4098-8271-af0f6ae0f5a5-operator-scripts\") pod \"placement-ea4f-account-create-update-rdkts\" (UID: \"603694e6-de5e-4098-8271-af0f6ae0f5a5\") " pod="openstack/placement-ea4f-account-create-update-rdkts"
Feb 23 13:27:41.130084 master-0 kubenswrapper[26474]: I0223 13:27:41.130144 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsmnb\" (UniqueName: \"kubernetes.io/projected/603694e6-de5e-4098-8271-af0f6ae0f5a5-kube-api-access-wsmnb\") pod \"placement-ea4f-account-create-update-rdkts\" (UID: \"603694e6-de5e-4098-8271-af0f6ae0f5a5\") " pod="openstack/placement-ea4f-account-create-update-rdkts"
Feb 23 13:27:41.130445 master-0 kubenswrapper[26474]: I0223 13:27:41.130225 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a664e235-45e6-4261-9aa8-524ebafc8fb7-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:41.130445 master-0 kubenswrapper[26474]: I0223 13:27:41.130239 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smrt6\" (UniqueName: \"kubernetes.io/projected/a664e235-45e6-4261-9aa8-524ebafc8fb7-kube-api-access-smrt6\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:41.131277 master-0 kubenswrapper[26474]: I0223 13:27:41.131247 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603694e6-de5e-4098-8271-af0f6ae0f5a5-operator-scripts\") pod \"placement-ea4f-account-create-update-rdkts\" (UID: \"603694e6-de5e-4098-8271-af0f6ae0f5a5\") " pod="openstack/placement-ea4f-account-create-update-rdkts"
Feb 23 13:27:41.151426 master-0
kubenswrapper[26474]: I0223 13:27:41.151379 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsmnb\" (UniqueName: \"kubernetes.io/projected/603694e6-de5e-4098-8271-af0f6ae0f5a5-kube-api-access-wsmnb\") pod \"placement-ea4f-account-create-update-rdkts\" (UID: \"603694e6-de5e-4098-8271-af0f6ae0f5a5\") " pod="openstack/placement-ea4f-account-create-update-rdkts"
Feb 23 13:27:41.276780 master-0 kubenswrapper[26474]: I0223 13:27:41.276726 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9kw5h"
Feb 23 13:27:41.291875 master-0 kubenswrapper[26474]: I0223 13:27:41.291792 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ea4f-account-create-update-rdkts"
Feb 23 13:27:41.317229 master-0 kubenswrapper[26474]: I0223 13:27:41.317164 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1b01-account-create-update-57vw2"]
Feb 23 13:27:41.430551 master-0 kubenswrapper[26474]: I0223 13:27:41.430469 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2a9d-account-create-update-2kcdv"]
Feb 23 13:27:41.540589 master-0 kubenswrapper[26474]: I0223 13:27:41.540215 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ffkfh"
Feb 23 13:27:41.540589 master-0 kubenswrapper[26474]: I0223 13:27:41.540208 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ffkfh" event={"ID":"a664e235-45e6-4261-9aa8-524ebafc8fb7","Type":"ContainerDied","Data":"05ecb0de8c575141fe4cbb764d2ef56d272177ddd5390d5ab730e9fe38e59cca"}
Feb 23 13:27:41.540589 master-0 kubenswrapper[26474]: I0223 13:27:41.540348 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05ecb0de8c575141fe4cbb764d2ef56d272177ddd5390d5ab730e9fe38e59cca"
Feb 23 13:27:41.543877 master-0 kubenswrapper[26474]: I0223 13:27:41.543661 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6wgzv"]
Feb 23 13:27:41.543877 master-0 kubenswrapper[26474]: I0223 13:27:41.543699 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b01-account-create-update-57vw2" event={"ID":"53a010b6-e0ac-42be-aa66-80acd726f647","Type":"ContainerStarted","Data":"e53422eb0f08ee7239c3c68b6ec7492068cad1ad201133d093ab491e99382cfb"}
Feb 23 13:27:41.549189 master-0 kubenswrapper[26474]: I0223 13:27:41.549141 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8zvqt" event={"ID":"f760f819-5bdd-4b3b-9374-7bde76377f34","Type":"ContainerStarted","Data":"0a1151e046f24c74b82a6ff58b0f3e0a89f57ca0b59bb6c9efdd42b5f80c85c6"}
Feb 23 13:27:41.550194 master-0 kubenswrapper[26474]: W0223 13:27:41.550152 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbec78070_f90d_43c3_b5dc_2b05b570d739.slice/crio-991960e800dbaf3e7a694950a7ecd6d658d3e98afb6284afca7bf8aab3783da2 WatchSource:0}: Error finding container 991960e800dbaf3e7a694950a7ecd6d658d3e98afb6284afca7bf8aab3783da2: Status 404 returned error can't find the container with id 991960e800dbaf3e7a694950a7ecd6d658d3e98afb6284afca7bf8aab3783da2
Feb 23 13:27:41.550913 master-0 kubenswrapper[26474]: I0223 13:27:41.550848 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a9d-account-create-update-2kcdv" event={"ID":"e049806d-aa21-4102-8583-a142a2f80c58","Type":"ContainerStarted","Data":"c51d812da8fa5bd43799e58aaf8cffd0092f95029b763f578d9dc19c0a99b368"}
Feb 23 13:27:41.568075 master-0 kubenswrapper[26474]: W0223 13:27:41.567831 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e097639_7f3b_414c_bbdc_a41202715f31.slice/crio-18679c3ebaf777151402ae12526e4b08fcdcfd54f9c7a28f61c65196f23db269 WatchSource:0}: Error finding container 18679c3ebaf777151402ae12526e4b08fcdcfd54f9c7a28f61c65196f23db269: Status 404 returned error can't find the container with id 18679c3ebaf777151402ae12526e4b08fcdcfd54f9c7a28f61c65196f23db269
Feb 23 13:27:41.568075 master-0 kubenswrapper[26474]: I0223 13:27:41.567867 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nbjdj"]
Feb 23 13:27:41.570161 master-0 kubenswrapper[26474]: I0223 13:27:41.569222 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:27:41.587719 master-0 kubenswrapper[26474]: I0223 13:27:41.587649 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8zvqt" podStartSLOduration=1.705108567 podStartE2EDuration="5.587627989s" podCreationTimestamp="2026-02-23 13:27:36 +0000 UTC" firstStartedPulling="2026-02-23 13:27:37.194125971 +0000 UTC m=+779.040633638" lastFinishedPulling="2026-02-23 13:27:41.076645383 +0000 UTC m=+782.923153060" observedRunningTime="2026-02-23 13:27:41.573181388 +0000 UTC m=+783.419689065" watchObservedRunningTime="2026-02-23 13:27:41.587627989 +0000 UTC m=+783.434135666"
Feb 23 13:27:41.710899 master-0 kubenswrapper[26474]: I0223 13:27:41.704326 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-cd2wc"]
Feb 23 13:27:41.710899 master-0 kubenswrapper[26474]: I0223 13:27:41.704715 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" podUID="3078ad03-a115-4907-840e-a5c5057bed71" containerName="dnsmasq-dns" containerID="cri-o://1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae" gracePeriod=10
Feb 23 13:27:41.814508 master-0 kubenswrapper[26474]: I0223 13:27:41.813898 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9kw5h"]
Feb 23 13:27:41.865986 master-0 kubenswrapper[26474]: W0223 13:27:41.865926 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eea86bf_8169_406d_bd43_b83c6d53f41f.slice/crio-b837de2ecbf8760806b809a29740d13319eee09dbb3e938a5bdd2c8eb53a5da5 WatchSource:0}: Error finding container b837de2ecbf8760806b809a29740d13319eee09dbb3e938a5bdd2c8eb53a5da5: Status 404 returned error can't find the container with id b837de2ecbf8760806b809a29740d13319eee09dbb3e938a5bdd2c8eb53a5da5
Feb 23 13:27:42.042663 master-0 kubenswrapper[26474]: I0223 13:27:42.042596 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ea4f-account-create-update-rdkts"]
Feb 23 13:27:42.340912 master-0 kubenswrapper[26474]: I0223 13:27:42.340862 26474 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:27:42.510926 master-0 kubenswrapper[26474]: I0223 13:27:42.507326 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-config\") pod \"3078ad03-a115-4907-840e-a5c5057bed71\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") "
Feb 23 13:27:42.510926 master-0 kubenswrapper[26474]: I0223 13:27:42.507476 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-dns-svc\") pod \"3078ad03-a115-4907-840e-a5c5057bed71\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") "
Feb 23 13:27:42.510926 master-0 kubenswrapper[26474]: I0223 13:27:42.507615 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62mqh\" (UniqueName: \"kubernetes.io/projected/3078ad03-a115-4907-840e-a5c5057bed71-kube-api-access-62mqh\") pod \"3078ad03-a115-4907-840e-a5c5057bed71\" (UID: \"3078ad03-a115-4907-840e-a5c5057bed71\") "
Feb 23 13:27:42.525565 master-0 kubenswrapper[26474]: I0223 13:27:42.518150 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3078ad03-a115-4907-840e-a5c5057bed71-kube-api-access-62mqh" (OuterVolumeSpecName: "kube-api-access-62mqh") pod "3078ad03-a115-4907-840e-a5c5057bed71" (UID: "3078ad03-a115-4907-840e-a5c5057bed71"). InnerVolumeSpecName "kube-api-access-62mqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:27:42.566650 master-0 kubenswrapper[26474]: I0223 13:27:42.566588 26474 generic.go:334] "Generic (PLEG): container finished" podID="53a010b6-e0ac-42be-aa66-80acd726f647" containerID="1e1b78fc7afc6c59c1c73199b3d27f1953027bef7ba43752742a26d0b87ff636" exitCode=0
Feb 23 13:27:42.567179 master-0 kubenswrapper[26474]: I0223 13:27:42.566713 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b01-account-create-update-57vw2" event={"ID":"53a010b6-e0ac-42be-aa66-80acd726f647","Type":"ContainerDied","Data":"1e1b78fc7afc6c59c1c73199b3d27f1953027bef7ba43752742a26d0b87ff636"}
Feb 23 13:27:42.569199 master-0 kubenswrapper[26474]: I0223 13:27:42.569162 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc"
Feb 23 13:27:42.569199 master-0 kubenswrapper[26474]: I0223 13:27:42.569102 26474 generic.go:334] "Generic (PLEG): container finished" podID="3078ad03-a115-4907-840e-a5c5057bed71" containerID="1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae" exitCode=0
Feb 23 13:27:42.569322 master-0 kubenswrapper[26474]: I0223 13:27:42.569274 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" event={"ID":"3078ad03-a115-4907-840e-a5c5057bed71","Type":"ContainerDied","Data":"1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae"}
Feb 23 13:27:42.569322 master-0 kubenswrapper[26474]: I0223 13:27:42.569304 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-cd2wc" event={"ID":"3078ad03-a115-4907-840e-a5c5057bed71","Type":"ContainerDied","Data":"70e9a550a818928642f7b9790faed54147a53d611f4c6fa571ac954e25b58e3e"}
Feb 23 13:27:42.569322 master-0 kubenswrapper[26474]: I0223 13:27:42.569321 26474 scope.go:117] "RemoveContainer" containerID="1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae"
Feb 23 13:27:42.571729 master-0 kubenswrapper[26474]: I0223 13:27:42.571659 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9kw5h" event={"ID":"0eea86bf-8169-406d-bd43-b83c6d53f41f","Type":"ContainerStarted","Data":"12a4eab544abef5083adb6123f8f52ec9e162622794db36ce83ed338655072ab"}
Feb 23 13:27:42.571729 master-0 kubenswrapper[26474]: I0223 13:27:42.571717 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9kw5h" event={"ID":"0eea86bf-8169-406d-bd43-b83c6d53f41f","Type":"ContainerStarted","Data":"b837de2ecbf8760806b809a29740d13319eee09dbb3e938a5bdd2c8eb53a5da5"}
Feb 23 13:27:42.573362 master-0 kubenswrapper[26474]: I0223 13:27:42.573142 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea4f-account-create-update-rdkts" event={"ID":"603694e6-de5e-4098-8271-af0f6ae0f5a5","Type":"ContainerStarted","Data":"8f9bed3fe78674372b2096a3e64768c7ae9ee6f2c2ca03cf9ea2571b6506cfbb"}
Feb 23 13:27:42.573362 master-0 kubenswrapper[26474]: I0223 13:27:42.573172 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea4f-account-create-update-rdkts" event={"ID":"603694e6-de5e-4098-8271-af0f6ae0f5a5","Type":"ContainerStarted","Data":"d8e584961ff990ebebbbe1b95f0afc9b1e88a69a51d1e6fc7e119e504c56a666"}
Feb 23 13:27:42.578069 master-0 kubenswrapper[26474]: I0223 13:27:42.578036 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbjdj" event={"ID":"1e097639-7f3b-414c-bbdc-a41202715f31","Type":"ContainerStarted","Data":"265e29f4d00888469263bbe789f7c7ad8ff43049bdfb6b0d69e56e28030069da"}
Feb 23 13:27:42.578069 master-0 kubenswrapper[26474]: I0223 13:27:42.578064 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbjdj" event={"ID":"1e097639-7f3b-414c-bbdc-a41202715f31","Type":"ContainerStarted","Data":"18679c3ebaf777151402ae12526e4b08fcdcfd54f9c7a28f61c65196f23db269"}
Feb 23 13:27:42.580969 master-0 kubenswrapper[26474]: I0223 13:27:42.580931 26474 generic.go:334] "Generic (PLEG): container finished" podID="e049806d-aa21-4102-8583-a142a2f80c58" containerID="ee03b1c0337465a7c0af6335c9c24e04c2082bff1206b33e05dfa5b89adaef3f" exitCode=0
Feb 23 13:27:42.581134 master-0 kubenswrapper[26474]: I0223 13:27:42.581115 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a9d-account-create-update-2kcdv" event={"ID":"e049806d-aa21-4102-8583-a142a2f80c58","Type":"ContainerDied","Data":"ee03b1c0337465a7c0af6335c9c24e04c2082bff1206b33e05dfa5b89adaef3f"}
Feb 23 13:27:42.591751 master-0 kubenswrapper[26474]: I0223 13:27:42.591680 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3078ad03-a115-4907-840e-a5c5057bed71" (UID: "3078ad03-a115-4907-840e-a5c5057bed71"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:42.601354 master-0 kubenswrapper[26474]: I0223 13:27:42.594642 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-config" (OuterVolumeSpecName: "config") pod "3078ad03-a115-4907-840e-a5c5057bed71" (UID: "3078ad03-a115-4907-840e-a5c5057bed71"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:42.603739 master-0 kubenswrapper[26474]: I0223 13:27:42.603698 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6wgzv" event={"ID":"bec78070-f90d-43c3-b5dc-2b05b570d739","Type":"ContainerStarted","Data":"e001bd9a45325901f341110aa8fb045be2e1b672111501378c7ece19cc056ec0"}
Feb 23 13:27:42.607500 master-0 kubenswrapper[26474]: I0223 13:27:42.603756 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6wgzv" event={"ID":"bec78070-f90d-43c3-b5dc-2b05b570d739","Type":"ContainerStarted","Data":"991960e800dbaf3e7a694950a7ecd6d658d3e98afb6284afca7bf8aab3783da2"}
Feb 23 13:27:42.609696 master-0 kubenswrapper[26474]: I0223 13:27:42.609582 26474 scope.go:117] "RemoveContainer" containerID="f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1"
Feb 23 13:27:42.614091 master-0 kubenswrapper[26474]: I0223 13:27:42.614034 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:42.614365 master-0 kubenswrapper[26474]: I0223 13:27:42.614317 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3078ad03-a115-4907-840e-a5c5057bed71-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:42.614438 master-0 kubenswrapper[26474]: I0223 13:27:42.614366 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62mqh\" (UniqueName: \"kubernetes.io/projected/3078ad03-a115-4907-840e-a5c5057bed71-kube-api-access-62mqh\") on node \"master-0\" DevicePath \"\""
Feb 23 13:27:42.618868 master-0 kubenswrapper[26474]: I0223 13:27:42.618778 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-nbjdj" podStartSLOduration=2.618755363 podStartE2EDuration="2.618755363s" podCreationTimestamp="2026-02-23 13:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:42.604096797 +0000 UTC m=+784.450604474" watchObservedRunningTime="2026-02-23 13:27:42.618755363 +0000 UTC m=+784.465263050"
Feb 23 13:27:42.630439 master-0 kubenswrapper[26474]: I0223 13:27:42.629411 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-9kw5h" podStartSLOduration=2.629392161 podStartE2EDuration="2.629392161s" podCreationTimestamp="2026-02-23 13:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:42.622914544 +0000 UTC m=+784.469422231" watchObservedRunningTime="2026-02-23 13:27:42.629392161 +0000 UTC m=+784.475899838"
Feb 23 13:27:42.655685 master-0 kubenswrapper[26474]: I0223 13:27:42.655533 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ea4f-account-create-update-rdkts" podStartSLOduration=2.655513296 podStartE2EDuration="2.655513296s" podCreationTimestamp="2026-02-23 13:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:42.637538449 +0000 UTC m=+784.484046146" watchObservedRunningTime="2026-02-23 13:27:42.655513296 +0000 UTC m=+784.502020973"
Feb 23 13:27:42.663071 master-0 kubenswrapper[26474]: I0223 13:27:42.662997 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-6wgzv" podStartSLOduration=3.662986637 podStartE2EDuration="3.662986637s" podCreationTimestamp="2026-02-23 13:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:42.658882247 +0000 UTC m=+784.505389924" watchObservedRunningTime="2026-02-23 13:27:42.662986637 +0000 UTC m=+784.509494304"
Feb 23 13:27:42.874333 master-0 kubenswrapper[26474]: I0223 13:27:42.874269 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ffkfh"]
Feb 23 13:27:42.884451 master-0 kubenswrapper[26474]: I0223 13:27:42.884383 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ffkfh"]
Feb 23 13:27:42.937854 master-0 kubenswrapper[26474]: I0223 13:27:42.936383 26474 scope.go:117] "RemoveContainer" containerID="1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae"
Feb 23 13:27:42.937854 master-0 kubenswrapper[26474]: E0223 13:27:42.936862 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae\": container with ID starting with 1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae not found: ID does not exist" containerID="1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae"
Feb 23 13:27:42.937854 master-0 kubenswrapper[26474]: I0223 13:27:42.936943 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae"} err="failed to get container status \"1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae\": rpc error: code = NotFound desc = could not find container \"1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae\": container with ID starting with 1078347087b3291c90252cbd3e13c131458a86d20f95fbf6e09ea5ce117c87ae not found: ID does not exist"
Feb 23 13:27:42.937854 master-0 kubenswrapper[26474]: I0223 13:27:42.936973 26474 scope.go:117] "RemoveContainer" containerID="f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1"
Feb 23 13:27:42.937854 master-0 kubenswrapper[26474]: E0223 13:27:42.937292 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1\": container with ID starting with f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1 not found: ID does not exist" containerID="f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1"
Feb 23 13:27:42.937854 master-0 kubenswrapper[26474]: I0223 13:27:42.937323 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1"} err="failed to get container status \"f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1\": rpc error: code = NotFound desc = could not find container \"f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1\": container with ID starting with f183d9de8799dc21e5cf4881463e1d20423de35f4217e0dad4bf9f4bdc026fc1 not found: ID does not exist"
Feb 23 13:27:43.046358 master-0 kubenswrapper[26474]: I0223 13:27:43.046220 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-cd2wc"]
Feb 23 13:27:43.071894 master-0 kubenswrapper[26474]: I0223 13:27:43.071801 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-cd2wc"]
Feb 23 13:27:43.622633 master-0 kubenswrapper[26474]: I0223 13:27:43.622551 26474 generic.go:334] "Generic (PLEG): container finished" podID="0eea86bf-8169-406d-bd43-b83c6d53f41f" containerID="12a4eab544abef5083adb6123f8f52ec9e162622794db36ce83ed338655072ab" exitCode=0
Feb 23 13:27:43.623288 master-0 kubenswrapper[26474]: I0223 13:27:43.622666 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9kw5h" event={"ID":"0eea86bf-8169-406d-bd43-b83c6d53f41f","Type":"ContainerDied","Data":"12a4eab544abef5083adb6123f8f52ec9e162622794db36ce83ed338655072ab"}
Feb 23 13:27:43.625583 master-0
kubenswrapper[26474]: I0223 13:27:43.625509 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea4f-account-create-update-rdkts" event={"ID":"603694e6-de5e-4098-8271-af0f6ae0f5a5","Type":"ContainerDied","Data":"8f9bed3fe78674372b2096a3e64768c7ae9ee6f2c2ca03cf9ea2571b6506cfbb"}
Feb 23 13:27:43.627002 master-0 kubenswrapper[26474]: I0223 13:27:43.625399 26474 generic.go:334] "Generic (PLEG): container finished" podID="603694e6-de5e-4098-8271-af0f6ae0f5a5" containerID="8f9bed3fe78674372b2096a3e64768c7ae9ee6f2c2ca03cf9ea2571b6506cfbb" exitCode=0
Feb 23 13:27:43.629127 master-0 kubenswrapper[26474]: I0223 13:27:43.629077 26474 generic.go:334] "Generic (PLEG): container finished" podID="1e097639-7f3b-414c-bbdc-a41202715f31" containerID="265e29f4d00888469263bbe789f7c7ad8ff43049bdfb6b0d69e56e28030069da" exitCode=0
Feb 23 13:27:43.629223 master-0 kubenswrapper[26474]: I0223 13:27:43.629126 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbjdj" event={"ID":"1e097639-7f3b-414c-bbdc-a41202715f31","Type":"ContainerDied","Data":"265e29f4d00888469263bbe789f7c7ad8ff43049bdfb6b0d69e56e28030069da"}
Feb 23 13:27:43.632187 master-0 kubenswrapper[26474]: I0223 13:27:43.632152 26474 generic.go:334] "Generic (PLEG): container finished" podID="bec78070-f90d-43c3-b5dc-2b05b570d739" containerID="e001bd9a45325901f341110aa8fb045be2e1b672111501378c7ece19cc056ec0" exitCode=0
Feb 23 13:27:43.632271 master-0 kubenswrapper[26474]: I0223 13:27:43.632223 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6wgzv" event={"ID":"bec78070-f90d-43c3-b5dc-2b05b570d739","Type":"ContainerDied","Data":"e001bd9a45325901f341110aa8fb045be2e1b672111501378c7ece19cc056ec0"}
Feb 23 13:27:44.216215 master-0 kubenswrapper[26474]: I0223 13:27:44.216078 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1b01-account-create-update-57vw2"
Feb 23 13:27:44.223021 master-0 kubenswrapper[26474]: I0223 13:27:44.222991 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a9d-account-create-update-2kcdv"
Feb 23 13:27:44.360289 master-0 kubenswrapper[26474]: I0223 13:27:44.360201 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e049806d-aa21-4102-8583-a142a2f80c58-operator-scripts\") pod \"e049806d-aa21-4102-8583-a142a2f80c58\" (UID: \"e049806d-aa21-4102-8583-a142a2f80c58\") "
Feb 23 13:27:44.360289 master-0 kubenswrapper[26474]: I0223 13:27:44.360297 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53a010b6-e0ac-42be-aa66-80acd726f647-operator-scripts\") pod \"53a010b6-e0ac-42be-aa66-80acd726f647\" (UID: \"53a010b6-e0ac-42be-aa66-80acd726f647\") "
Feb 23 13:27:44.360595 master-0 kubenswrapper[26474]: I0223 13:27:44.360328 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw999\" (UniqueName: \"kubernetes.io/projected/53a010b6-e0ac-42be-aa66-80acd726f647-kube-api-access-tw999\") pod \"53a010b6-e0ac-42be-aa66-80acd726f647\" (UID: \"53a010b6-e0ac-42be-aa66-80acd726f647\") "
Feb 23 13:27:44.360595 master-0 kubenswrapper[26474]: I0223 13:27:44.360561 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np9v2\" (UniqueName: \"kubernetes.io/projected/e049806d-aa21-4102-8583-a142a2f80c58-kube-api-access-np9v2\") pod \"e049806d-aa21-4102-8583-a142a2f80c58\" (UID: \"e049806d-aa21-4102-8583-a142a2f80c58\") "
Feb 23 13:27:44.361027 master-0 kubenswrapper[26474]: I0223 13:27:44.360960 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e049806d-aa21-4102-8583-a142a2f80c58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e049806d-aa21-4102-8583-a142a2f80c58" (UID: "e049806d-aa21-4102-8583-a142a2f80c58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:44.361631 master-0 kubenswrapper[26474]: I0223 13:27:44.361591 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53a010b6-e0ac-42be-aa66-80acd726f647-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53a010b6-e0ac-42be-aa66-80acd726f647" (UID: "53a010b6-e0ac-42be-aa66-80acd726f647"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:44.368624 master-0 kubenswrapper[26474]: I0223 13:27:44.368560 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a010b6-e0ac-42be-aa66-80acd726f647-kube-api-access-tw999" (OuterVolumeSpecName: "kube-api-access-tw999") pod "53a010b6-e0ac-42be-aa66-80acd726f647" (UID: "53a010b6-e0ac-42be-aa66-80acd726f647"). InnerVolumeSpecName "kube-api-access-tw999". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:27:44.368728 master-0 kubenswrapper[26474]: I0223 13:27:44.368695 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e049806d-aa21-4102-8583-a142a2f80c58-kube-api-access-np9v2" (OuterVolumeSpecName: "kube-api-access-np9v2") pod "e049806d-aa21-4102-8583-a142a2f80c58" (UID: "e049806d-aa21-4102-8583-a142a2f80c58"). InnerVolumeSpecName "kube-api-access-np9v2".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:44.414523 master-0 kubenswrapper[26474]: I0223 13:27:44.414435 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3078ad03-a115-4907-840e-a5c5057bed71" path="/var/lib/kubelet/pods/3078ad03-a115-4907-840e-a5c5057bed71/volumes" Feb 23 13:27:44.415284 master-0 kubenswrapper[26474]: I0223 13:27:44.415238 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a664e235-45e6-4261-9aa8-524ebafc8fb7" path="/var/lib/kubelet/pods/a664e235-45e6-4261-9aa8-524ebafc8fb7/volumes" Feb 23 13:27:44.463396 master-0 kubenswrapper[26474]: I0223 13:27:44.463317 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw999\" (UniqueName: \"kubernetes.io/projected/53a010b6-e0ac-42be-aa66-80acd726f647-kube-api-access-tw999\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:44.463396 master-0 kubenswrapper[26474]: I0223 13:27:44.463371 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np9v2\" (UniqueName: \"kubernetes.io/projected/e049806d-aa21-4102-8583-a142a2f80c58-kube-api-access-np9v2\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:44.463396 master-0 kubenswrapper[26474]: I0223 13:27:44.463382 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e049806d-aa21-4102-8583-a142a2f80c58-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:44.463396 master-0 kubenswrapper[26474]: I0223 13:27:44.463392 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53a010b6-e0ac-42be-aa66-80acd726f647-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:44.645451 master-0 kubenswrapper[26474]: I0223 13:27:44.645366 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a9d-account-create-update-2kcdv" 
event={"ID":"e049806d-aa21-4102-8583-a142a2f80c58","Type":"ContainerDied","Data":"c51d812da8fa5bd43799e58aaf8cffd0092f95029b763f578d9dc19c0a99b368"} Feb 23 13:27:44.645451 master-0 kubenswrapper[26474]: I0223 13:27:44.645427 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c51d812da8fa5bd43799e58aaf8cffd0092f95029b763f578d9dc19c0a99b368" Feb 23 13:27:44.645451 master-0 kubenswrapper[26474]: I0223 13:27:44.645385 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a9d-account-create-update-2kcdv" Feb 23 13:27:44.647672 master-0 kubenswrapper[26474]: I0223 13:27:44.647610 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b01-account-create-update-57vw2" event={"ID":"53a010b6-e0ac-42be-aa66-80acd726f647","Type":"ContainerDied","Data":"e53422eb0f08ee7239c3c68b6ec7492068cad1ad201133d093ab491e99382cfb"} Feb 23 13:27:44.647811 master-0 kubenswrapper[26474]: I0223 13:27:44.647737 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e53422eb0f08ee7239c3c68b6ec7492068cad1ad201133d093ab491e99382cfb" Feb 23 13:27:44.647887 master-0 kubenswrapper[26474]: I0223 13:27:44.647789 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1b01-account-create-update-57vw2" Feb 23 13:27:45.220580 master-0 kubenswrapper[26474]: I0223 13:27:45.220514 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ea4f-account-create-update-rdkts" Feb 23 13:27:45.287465 master-0 kubenswrapper[26474]: I0223 13:27:45.282229 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsmnb\" (UniqueName: \"kubernetes.io/projected/603694e6-de5e-4098-8271-af0f6ae0f5a5-kube-api-access-wsmnb\") pod \"603694e6-de5e-4098-8271-af0f6ae0f5a5\" (UID: \"603694e6-de5e-4098-8271-af0f6ae0f5a5\") " Feb 23 13:27:45.287465 master-0 kubenswrapper[26474]: I0223 13:27:45.282755 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603694e6-de5e-4098-8271-af0f6ae0f5a5-operator-scripts\") pod \"603694e6-de5e-4098-8271-af0f6ae0f5a5\" (UID: \"603694e6-de5e-4098-8271-af0f6ae0f5a5\") " Feb 23 13:27:45.287465 master-0 kubenswrapper[26474]: I0223 13:27:45.284067 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603694e6-de5e-4098-8271-af0f6ae0f5a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "603694e6-de5e-4098-8271-af0f6ae0f5a5" (UID: "603694e6-de5e-4098-8271-af0f6ae0f5a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:45.294117 master-0 kubenswrapper[26474]: I0223 13:27:45.289633 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603694e6-de5e-4098-8271-af0f6ae0f5a5-kube-api-access-wsmnb" (OuterVolumeSpecName: "kube-api-access-wsmnb") pod "603694e6-de5e-4098-8271-af0f6ae0f5a5" (UID: "603694e6-de5e-4098-8271-af0f6ae0f5a5"). InnerVolumeSpecName "kube-api-access-wsmnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:45.386331 master-0 kubenswrapper[26474]: I0223 13:27:45.386262 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsmnb\" (UniqueName: \"kubernetes.io/projected/603694e6-de5e-4098-8271-af0f6ae0f5a5-kube-api-access-wsmnb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:45.386331 master-0 kubenswrapper[26474]: I0223 13:27:45.386329 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603694e6-de5e-4098-8271-af0f6ae0f5a5-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:45.457027 master-0 kubenswrapper[26474]: I0223 13:27:45.456923 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nbjdj" Feb 23 13:27:45.462409 master-0 kubenswrapper[26474]: I0223 13:27:45.462306 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6wgzv" Feb 23 13:27:45.468614 master-0 kubenswrapper[26474]: I0223 13:27:45.468566 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9kw5h" Feb 23 13:27:45.589813 master-0 kubenswrapper[26474]: I0223 13:27:45.589666 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rscc2\" (UniqueName: \"kubernetes.io/projected/1e097639-7f3b-414c-bbdc-a41202715f31-kube-api-access-rscc2\") pod \"1e097639-7f3b-414c-bbdc-a41202715f31\" (UID: \"1e097639-7f3b-414c-bbdc-a41202715f31\") " Feb 23 13:27:45.589813 master-0 kubenswrapper[26474]: I0223 13:27:45.589802 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e097639-7f3b-414c-bbdc-a41202715f31-operator-scripts\") pod \"1e097639-7f3b-414c-bbdc-a41202715f31\" (UID: \"1e097639-7f3b-414c-bbdc-a41202715f31\") " Feb 23 13:27:45.590040 master-0 kubenswrapper[26474]: I0223 13:27:45.589942 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec78070-f90d-43c3-b5dc-2b05b570d739-operator-scripts\") pod \"bec78070-f90d-43c3-b5dc-2b05b570d739\" (UID: \"bec78070-f90d-43c3-b5dc-2b05b570d739\") " Feb 23 13:27:45.590410 master-0 kubenswrapper[26474]: I0223 13:27:45.590374 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqj4s\" (UniqueName: \"kubernetes.io/projected/0eea86bf-8169-406d-bd43-b83c6d53f41f-kube-api-access-cqj4s\") pod \"0eea86bf-8169-406d-bd43-b83c6d53f41f\" (UID: \"0eea86bf-8169-406d-bd43-b83c6d53f41f\") " Feb 23 13:27:45.590496 master-0 kubenswrapper[26474]: I0223 13:27:45.590468 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eea86bf-8169-406d-bd43-b83c6d53f41f-operator-scripts\") pod \"0eea86bf-8169-406d-bd43-b83c6d53f41f\" (UID: \"0eea86bf-8169-406d-bd43-b83c6d53f41f\") " Feb 23 13:27:45.590539 master-0 
kubenswrapper[26474]: I0223 13:27:45.590371 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e097639-7f3b-414c-bbdc-a41202715f31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e097639-7f3b-414c-bbdc-a41202715f31" (UID: "1e097639-7f3b-414c-bbdc-a41202715f31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:45.590575 master-0 kubenswrapper[26474]: I0223 13:27:45.590478 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec78070-f90d-43c3-b5dc-2b05b570d739-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bec78070-f90d-43c3-b5dc-2b05b570d739" (UID: "bec78070-f90d-43c3-b5dc-2b05b570d739"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:45.590608 master-0 kubenswrapper[26474]: I0223 13:27:45.590567 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsrhz\" (UniqueName: \"kubernetes.io/projected/bec78070-f90d-43c3-b5dc-2b05b570d739-kube-api-access-nsrhz\") pod \"bec78070-f90d-43c3-b5dc-2b05b570d739\" (UID: \"bec78070-f90d-43c3-b5dc-2b05b570d739\") " Feb 23 13:27:45.591106 master-0 kubenswrapper[26474]: I0223 13:27:45.591060 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eea86bf-8169-406d-bd43-b83c6d53f41f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eea86bf-8169-406d-bd43-b83c6d53f41f" (UID: "0eea86bf-8169-406d-bd43-b83c6d53f41f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:45.591450 master-0 kubenswrapper[26474]: I0223 13:27:45.591415 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e097639-7f3b-414c-bbdc-a41202715f31-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:45.591510 master-0 kubenswrapper[26474]: I0223 13:27:45.591458 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec78070-f90d-43c3-b5dc-2b05b570d739-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:45.591510 master-0 kubenswrapper[26474]: I0223 13:27:45.591476 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eea86bf-8169-406d-bd43-b83c6d53f41f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:45.593273 master-0 kubenswrapper[26474]: I0223 13:27:45.593252 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e097639-7f3b-414c-bbdc-a41202715f31-kube-api-access-rscc2" (OuterVolumeSpecName: "kube-api-access-rscc2") pod "1e097639-7f3b-414c-bbdc-a41202715f31" (UID: "1e097639-7f3b-414c-bbdc-a41202715f31"). InnerVolumeSpecName "kube-api-access-rscc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:45.593753 master-0 kubenswrapper[26474]: I0223 13:27:45.593723 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec78070-f90d-43c3-b5dc-2b05b570d739-kube-api-access-nsrhz" (OuterVolumeSpecName: "kube-api-access-nsrhz") pod "bec78070-f90d-43c3-b5dc-2b05b570d739" (UID: "bec78070-f90d-43c3-b5dc-2b05b570d739"). InnerVolumeSpecName "kube-api-access-nsrhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:45.595272 master-0 kubenswrapper[26474]: I0223 13:27:45.595243 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eea86bf-8169-406d-bd43-b83c6d53f41f-kube-api-access-cqj4s" (OuterVolumeSpecName: "kube-api-access-cqj4s") pod "0eea86bf-8169-406d-bd43-b83c6d53f41f" (UID: "0eea86bf-8169-406d-bd43-b83c6d53f41f"). InnerVolumeSpecName "kube-api-access-cqj4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:45.661942 master-0 kubenswrapper[26474]: I0223 13:27:45.661898 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6wgzv" event={"ID":"bec78070-f90d-43c3-b5dc-2b05b570d739","Type":"ContainerDied","Data":"991960e800dbaf3e7a694950a7ecd6d658d3e98afb6284afca7bf8aab3783da2"} Feb 23 13:27:45.662504 master-0 kubenswrapper[26474]: I0223 13:27:45.662488 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991960e800dbaf3e7a694950a7ecd6d658d3e98afb6284afca7bf8aab3783da2" Feb 23 13:27:45.662585 master-0 kubenswrapper[26474]: I0223 13:27:45.661940 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6wgzv" Feb 23 13:27:45.664075 master-0 kubenswrapper[26474]: I0223 13:27:45.664025 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9kw5h" event={"ID":"0eea86bf-8169-406d-bd43-b83c6d53f41f","Type":"ContainerDied","Data":"b837de2ecbf8760806b809a29740d13319eee09dbb3e938a5bdd2c8eb53a5da5"} Feb 23 13:27:45.664139 master-0 kubenswrapper[26474]: I0223 13:27:45.664080 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b837de2ecbf8760806b809a29740d13319eee09dbb3e938a5bdd2c8eb53a5da5" Feb 23 13:27:45.664175 master-0 kubenswrapper[26474]: I0223 13:27:45.664145 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9kw5h" Feb 23 13:27:45.666182 master-0 kubenswrapper[26474]: I0223 13:27:45.666155 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea4f-account-create-update-rdkts" event={"ID":"603694e6-de5e-4098-8271-af0f6ae0f5a5","Type":"ContainerDied","Data":"d8e584961ff990ebebbbe1b95f0afc9b1e88a69a51d1e6fc7e119e504c56a666"} Feb 23 13:27:45.666182 master-0 kubenswrapper[26474]: I0223 13:27:45.666169 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ea4f-account-create-update-rdkts" Feb 23 13:27:45.666294 master-0 kubenswrapper[26474]: I0223 13:27:45.666183 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e584961ff990ebebbbe1b95f0afc9b1e88a69a51d1e6fc7e119e504c56a666" Feb 23 13:27:45.668226 master-0 kubenswrapper[26474]: I0223 13:27:45.668199 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbjdj" event={"ID":"1e097639-7f3b-414c-bbdc-a41202715f31","Type":"ContainerDied","Data":"18679c3ebaf777151402ae12526e4b08fcdcfd54f9c7a28f61c65196f23db269"} Feb 23 13:27:45.668288 master-0 kubenswrapper[26474]: I0223 13:27:45.668226 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18679c3ebaf777151402ae12526e4b08fcdcfd54f9c7a28f61c65196f23db269" Feb 23 13:27:45.668288 master-0 kubenswrapper[26474]: I0223 13:27:45.668268 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nbjdj" Feb 23 13:27:45.693798 master-0 kubenswrapper[26474]: I0223 13:27:45.693743 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqj4s\" (UniqueName: \"kubernetes.io/projected/0eea86bf-8169-406d-bd43-b83c6d53f41f-kube-api-access-cqj4s\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:45.693798 master-0 kubenswrapper[26474]: I0223 13:27:45.693786 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsrhz\" (UniqueName: \"kubernetes.io/projected/bec78070-f90d-43c3-b5dc-2b05b570d739-kube-api-access-nsrhz\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:45.693798 master-0 kubenswrapper[26474]: I0223 13:27:45.693797 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rscc2\" (UniqueName: \"kubernetes.io/projected/1e097639-7f3b-414c-bbdc-a41202715f31-kube-api-access-rscc2\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:46.221317 master-0 kubenswrapper[26474]: I0223 13:27:46.221238 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rlgt9"] Feb 23 13:27:46.221878 master-0 kubenswrapper[26474]: E0223 13:27:46.221846 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e049806d-aa21-4102-8583-a142a2f80c58" containerName="mariadb-account-create-update" Feb 23 13:27:46.221878 master-0 kubenswrapper[26474]: I0223 13:27:46.221869 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="e049806d-aa21-4102-8583-a142a2f80c58" containerName="mariadb-account-create-update" Feb 23 13:27:46.222005 master-0 kubenswrapper[26474]: E0223 13:27:46.221900 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eea86bf-8169-406d-bd43-b83c6d53f41f" containerName="mariadb-database-create" Feb 23 13:27:46.222005 master-0 kubenswrapper[26474]: I0223 13:27:46.221909 26474 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0eea86bf-8169-406d-bd43-b83c6d53f41f" containerName="mariadb-database-create" Feb 23 13:27:46.222005 master-0 kubenswrapper[26474]: E0223 13:27:46.221939 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3078ad03-a115-4907-840e-a5c5057bed71" containerName="dnsmasq-dns" Feb 23 13:27:46.222005 master-0 kubenswrapper[26474]: I0223 13:27:46.221951 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="3078ad03-a115-4907-840e-a5c5057bed71" containerName="dnsmasq-dns" Feb 23 13:27:46.222005 master-0 kubenswrapper[26474]: E0223 13:27:46.221977 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec78070-f90d-43c3-b5dc-2b05b570d739" containerName="mariadb-database-create" Feb 23 13:27:46.222005 master-0 kubenswrapper[26474]: I0223 13:27:46.221988 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec78070-f90d-43c3-b5dc-2b05b570d739" containerName="mariadb-database-create" Feb 23 13:27:46.222308 master-0 kubenswrapper[26474]: E0223 13:27:46.222020 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e097639-7f3b-414c-bbdc-a41202715f31" containerName="mariadb-database-create" Feb 23 13:27:46.222308 master-0 kubenswrapper[26474]: I0223 13:27:46.222031 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e097639-7f3b-414c-bbdc-a41202715f31" containerName="mariadb-database-create" Feb 23 13:27:46.222308 master-0 kubenswrapper[26474]: E0223 13:27:46.222054 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a010b6-e0ac-42be-aa66-80acd726f647" containerName="mariadb-account-create-update" Feb 23 13:27:46.222308 master-0 kubenswrapper[26474]: I0223 13:27:46.222066 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a010b6-e0ac-42be-aa66-80acd726f647" containerName="mariadb-account-create-update" Feb 23 13:27:46.222308 master-0 kubenswrapper[26474]: E0223 13:27:46.222105 26474 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3078ad03-a115-4907-840e-a5c5057bed71" containerName="init" Feb 23 13:27:46.222308 master-0 kubenswrapper[26474]: I0223 13:27:46.222117 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="3078ad03-a115-4907-840e-a5c5057bed71" containerName="init" Feb 23 13:27:46.222308 master-0 kubenswrapper[26474]: E0223 13:27:46.222153 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603694e6-de5e-4098-8271-af0f6ae0f5a5" containerName="mariadb-account-create-update" Feb 23 13:27:46.222308 master-0 kubenswrapper[26474]: I0223 13:27:46.222165 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="603694e6-de5e-4098-8271-af0f6ae0f5a5" containerName="mariadb-account-create-update" Feb 23 13:27:46.222674 master-0 kubenswrapper[26474]: I0223 13:27:46.222587 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="e049806d-aa21-4102-8583-a142a2f80c58" containerName="mariadb-account-create-update" Feb 23 13:27:46.222674 master-0 kubenswrapper[26474]: I0223 13:27:46.222617 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="3078ad03-a115-4907-840e-a5c5057bed71" containerName="dnsmasq-dns" Feb 23 13:27:46.222674 master-0 kubenswrapper[26474]: I0223 13:27:46.222634 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec78070-f90d-43c3-b5dc-2b05b570d739" containerName="mariadb-database-create" Feb 23 13:27:46.222674 master-0 kubenswrapper[26474]: I0223 13:27:46.222653 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eea86bf-8169-406d-bd43-b83c6d53f41f" containerName="mariadb-database-create" Feb 23 13:27:46.222833 master-0 kubenswrapper[26474]: I0223 13:27:46.222684 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="603694e6-de5e-4098-8271-af0f6ae0f5a5" containerName="mariadb-account-create-update" Feb 23 13:27:46.222833 master-0 kubenswrapper[26474]: I0223 13:27:46.222709 26474 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1e097639-7f3b-414c-bbdc-a41202715f31" containerName="mariadb-database-create" Feb 23 13:27:46.222833 master-0 kubenswrapper[26474]: I0223 13:27:46.222730 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a010b6-e0ac-42be-aa66-80acd726f647" containerName="mariadb-account-create-update" Feb 23 13:27:46.223734 master-0 kubenswrapper[26474]: I0223 13:27:46.223701 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:46.228396 master-0 kubenswrapper[26474]: I0223 13:27:46.228320 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 23 13:27:46.240388 master-0 kubenswrapper[26474]: I0223 13:27:46.240305 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rlgt9"] Feb 23 13:27:46.309502 master-0 kubenswrapper[26474]: I0223 13:27:46.309437 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68992f03-040b-4bfd-88bf-38622d28c78c-operator-scripts\") pod \"root-account-create-update-rlgt9\" (UID: \"68992f03-040b-4bfd-88bf-38622d28c78c\") " pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:46.309778 master-0 kubenswrapper[26474]: I0223 13:27:46.309728 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6p2\" (UniqueName: \"kubernetes.io/projected/68992f03-040b-4bfd-88bf-38622d28c78c-kube-api-access-8j6p2\") pod \"root-account-create-update-rlgt9\" (UID: \"68992f03-040b-4bfd-88bf-38622d28c78c\") " pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:46.411729 master-0 kubenswrapper[26474]: I0223 13:27:46.411672 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6p2\" (UniqueName: 
\"kubernetes.io/projected/68992f03-040b-4bfd-88bf-38622d28c78c-kube-api-access-8j6p2\") pod \"root-account-create-update-rlgt9\" (UID: \"68992f03-040b-4bfd-88bf-38622d28c78c\") " pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:46.412004 master-0 kubenswrapper[26474]: I0223 13:27:46.411798 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68992f03-040b-4bfd-88bf-38622d28c78c-operator-scripts\") pod \"root-account-create-update-rlgt9\" (UID: \"68992f03-040b-4bfd-88bf-38622d28c78c\") " pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:46.412493 master-0 kubenswrapper[26474]: I0223 13:27:46.412465 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68992f03-040b-4bfd-88bf-38622d28c78c-operator-scripts\") pod \"root-account-create-update-rlgt9\" (UID: \"68992f03-040b-4bfd-88bf-38622d28c78c\") " pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:46.433784 master-0 kubenswrapper[26474]: I0223 13:27:46.433541 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6p2\" (UniqueName: \"kubernetes.io/projected/68992f03-040b-4bfd-88bf-38622d28c78c-kube-api-access-8j6p2\") pod \"root-account-create-update-rlgt9\" (UID: \"68992f03-040b-4bfd-88bf-38622d28c78c\") " pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:46.560941 master-0 kubenswrapper[26474]: I0223 13:27:46.560865 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:47.013787 master-0 kubenswrapper[26474]: I0223 13:27:47.013704 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rlgt9"] Feb 23 13:27:47.692827 master-0 kubenswrapper[26474]: I0223 13:27:47.692723 26474 generic.go:334] "Generic (PLEG): container finished" podID="68992f03-040b-4bfd-88bf-38622d28c78c" containerID="b48c02425a759e3b577ebfca42c7552b279e1fa35569f1c8173cec14f819bc4a" exitCode=0 Feb 23 13:27:47.692827 master-0 kubenswrapper[26474]: I0223 13:27:47.692811 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rlgt9" event={"ID":"68992f03-040b-4bfd-88bf-38622d28c78c","Type":"ContainerDied","Data":"b48c02425a759e3b577ebfca42c7552b279e1fa35569f1c8173cec14f819bc4a"} Feb 23 13:27:47.694646 master-0 kubenswrapper[26474]: I0223 13:27:47.692854 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rlgt9" event={"ID":"68992f03-040b-4bfd-88bf-38622d28c78c","Type":"ContainerStarted","Data":"9e241f8d9b96710b6282ab8e8328f5387335a287fdaa18139a068995abd9d221"} Feb 23 13:27:48.716483 master-0 kubenswrapper[26474]: I0223 13:27:48.716180 26474 generic.go:334] "Generic (PLEG): container finished" podID="f760f819-5bdd-4b3b-9374-7bde76377f34" containerID="0a1151e046f24c74b82a6ff58b0f3e0a89f57ca0b59bb6c9efdd42b5f80c85c6" exitCode=0 Feb 23 13:27:48.716483 master-0 kubenswrapper[26474]: I0223 13:27:48.716294 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8zvqt" event={"ID":"f760f819-5bdd-4b3b-9374-7bde76377f34","Type":"ContainerDied","Data":"0a1151e046f24c74b82a6ff58b0f3e0a89f57ca0b59bb6c9efdd42b5f80c85c6"} Feb 23 13:27:48.926167 master-0 kubenswrapper[26474]: I0223 13:27:48.925987 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 23 13:27:49.113951 master-0 
kubenswrapper[26474]: I0223 13:27:49.113873 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:49.120021 master-0 kubenswrapper[26474]: I0223 13:27:49.119961 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7172c21a-db8e-428a-9a0c-5ef060abafd3-etc-swift\") pod \"swift-storage-0\" (UID: \"7172c21a-db8e-428a-9a0c-5ef060abafd3\") " pod="openstack/swift-storage-0" Feb 23 13:27:49.183838 master-0 kubenswrapper[26474]: I0223 13:27:49.183778 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 23 13:27:49.252690 master-0 kubenswrapper[26474]: I0223 13:27:49.252641 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:49.421772 master-0 kubenswrapper[26474]: I0223 13:27:49.421635 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j6p2\" (UniqueName: \"kubernetes.io/projected/68992f03-040b-4bfd-88bf-38622d28c78c-kube-api-access-8j6p2\") pod \"68992f03-040b-4bfd-88bf-38622d28c78c\" (UID: \"68992f03-040b-4bfd-88bf-38622d28c78c\") " Feb 23 13:27:49.422048 master-0 kubenswrapper[26474]: I0223 13:27:49.422027 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68992f03-040b-4bfd-88bf-38622d28c78c-operator-scripts\") pod \"68992f03-040b-4bfd-88bf-38622d28c78c\" (UID: \"68992f03-040b-4bfd-88bf-38622d28c78c\") " Feb 23 13:27:49.422905 master-0 kubenswrapper[26474]: I0223 13:27:49.422859 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68992f03-040b-4bfd-88bf-38622d28c78c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68992f03-040b-4bfd-88bf-38622d28c78c" (UID: "68992f03-040b-4bfd-88bf-38622d28c78c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:49.427080 master-0 kubenswrapper[26474]: I0223 13:27:49.427021 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68992f03-040b-4bfd-88bf-38622d28c78c-kube-api-access-8j6p2" (OuterVolumeSpecName: "kube-api-access-8j6p2") pod "68992f03-040b-4bfd-88bf-38622d28c78c" (UID: "68992f03-040b-4bfd-88bf-38622d28c78c"). InnerVolumeSpecName "kube-api-access-8j6p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:49.525395 master-0 kubenswrapper[26474]: I0223 13:27:49.525311 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j6p2\" (UniqueName: \"kubernetes.io/projected/68992f03-040b-4bfd-88bf-38622d28c78c-kube-api-access-8j6p2\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:49.525395 master-0 kubenswrapper[26474]: I0223 13:27:49.525380 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68992f03-040b-4bfd-88bf-38622d28c78c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:49.659053 master-0 kubenswrapper[26474]: I0223 13:27:49.658991 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 13:27:49.669777 master-0 kubenswrapper[26474]: W0223 13:27:49.669712 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7172c21a_db8e_428a_9a0c_5ef060abafd3.slice/crio-69e8bc8a99615cb4f7aac168f9875ff8789a1b8b504caf26b8dff3d7e0305543 WatchSource:0}: Error finding container 69e8bc8a99615cb4f7aac168f9875ff8789a1b8b504caf26b8dff3d7e0305543: Status 404 returned error can't find the container with id 69e8bc8a99615cb4f7aac168f9875ff8789a1b8b504caf26b8dff3d7e0305543 Feb 23 13:27:49.732693 master-0 kubenswrapper[26474]: I0223 13:27:49.732636 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"69e8bc8a99615cb4f7aac168f9875ff8789a1b8b504caf26b8dff3d7e0305543"} Feb 23 13:27:49.743677 master-0 kubenswrapper[26474]: I0223 13:27:49.743605 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rlgt9" Feb 23 13:27:49.747505 master-0 kubenswrapper[26474]: I0223 13:27:49.747445 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rlgt9" event={"ID":"68992f03-040b-4bfd-88bf-38622d28c78c","Type":"ContainerDied","Data":"9e241f8d9b96710b6282ab8e8328f5387335a287fdaa18139a068995abd9d221"} Feb 23 13:27:49.747645 master-0 kubenswrapper[26474]: I0223 13:27:49.747622 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e241f8d9b96710b6282ab8e8328f5387335a287fdaa18139a068995abd9d221" Feb 23 13:27:49.755446 master-0 kubenswrapper[26474]: I0223 13:27:49.750465 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tcp42"] Feb 23 13:27:49.755446 master-0 kubenswrapper[26474]: E0223 13:27:49.751198 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68992f03-040b-4bfd-88bf-38622d28c78c" containerName="mariadb-account-create-update" Feb 23 13:27:49.755446 master-0 kubenswrapper[26474]: I0223 13:27:49.751237 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="68992f03-040b-4bfd-88bf-38622d28c78c" containerName="mariadb-account-create-update" Feb 23 13:27:49.755446 master-0 kubenswrapper[26474]: I0223 13:27:49.751576 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="68992f03-040b-4bfd-88bf-38622d28c78c" containerName="mariadb-account-create-update" Feb 23 13:27:49.755446 master-0 kubenswrapper[26474]: I0223 13:27:49.754900 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.763689 master-0 kubenswrapper[26474]: I0223 13:27:49.763651 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4fec4-config-data" Feb 23 13:27:49.767598 master-0 kubenswrapper[26474]: I0223 13:27:49.767530 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tcp42"] Feb 23 13:27:49.830962 master-0 kubenswrapper[26474]: I0223 13:27:49.830911 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-combined-ca-bundle\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.831265 master-0 kubenswrapper[26474]: I0223 13:27:49.831248 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-config-data\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.831460 master-0 kubenswrapper[26474]: I0223 13:27:49.831428 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-db-sync-config-data\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.831516 master-0 kubenswrapper[26474]: I0223 13:27:49.831481 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvjk\" (UniqueName: \"kubernetes.io/projected/29a70f49-894f-470f-bbe1-205e6714fe94-kube-api-access-xlvjk\") pod \"glance-db-sync-tcp42\" (UID: 
\"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.936441 master-0 kubenswrapper[26474]: I0223 13:27:49.933577 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-combined-ca-bundle\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.936441 master-0 kubenswrapper[26474]: I0223 13:27:49.933691 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-config-data\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.936441 master-0 kubenswrapper[26474]: I0223 13:27:49.933753 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-db-sync-config-data\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.936441 master-0 kubenswrapper[26474]: I0223 13:27:49.933784 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvjk\" (UniqueName: \"kubernetes.io/projected/29a70f49-894f-470f-bbe1-205e6714fe94-kube-api-access-xlvjk\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.940361 master-0 kubenswrapper[26474]: I0223 13:27:49.940125 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-combined-ca-bundle\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " 
pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.941013 master-0 kubenswrapper[26474]: I0223 13:27:49.940989 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-config-data\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.945461 master-0 kubenswrapper[26474]: I0223 13:27:49.943143 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-db-sync-config-data\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:49.983365 master-0 kubenswrapper[26474]: I0223 13:27:49.974589 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvjk\" (UniqueName: \"kubernetes.io/projected/29a70f49-894f-470f-bbe1-205e6714fe94-kube-api-access-xlvjk\") pod \"glance-db-sync-tcp42\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") " pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:50.098825 master-0 kubenswrapper[26474]: I0223 13:27:50.098763 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tcp42" Feb 23 13:27:50.266680 master-0 kubenswrapper[26474]: I0223 13:27:50.266623 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8zvqt" Feb 23 13:27:50.351418 master-0 kubenswrapper[26474]: I0223 13:27:50.350980 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-swiftconf\") pod \"f760f819-5bdd-4b3b-9374-7bde76377f34\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " Feb 23 13:27:50.351418 master-0 kubenswrapper[26474]: I0223 13:27:50.351100 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-dispersionconf\") pod \"f760f819-5bdd-4b3b-9374-7bde76377f34\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " Feb 23 13:27:50.351418 master-0 kubenswrapper[26474]: I0223 13:27:50.351138 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-combined-ca-bundle\") pod \"f760f819-5bdd-4b3b-9374-7bde76377f34\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " Feb 23 13:27:50.351418 master-0 kubenswrapper[26474]: I0223 13:27:50.351175 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr4hd\" (UniqueName: \"kubernetes.io/projected/f760f819-5bdd-4b3b-9374-7bde76377f34-kube-api-access-fr4hd\") pod \"f760f819-5bdd-4b3b-9374-7bde76377f34\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " Feb 23 13:27:50.351418 master-0 kubenswrapper[26474]: I0223 13:27:50.351201 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-scripts\") pod \"f760f819-5bdd-4b3b-9374-7bde76377f34\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " Feb 23 13:27:50.351418 master-0 kubenswrapper[26474]: I0223 13:27:50.351249 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-ring-data-devices\") pod \"f760f819-5bdd-4b3b-9374-7bde76377f34\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " Feb 23 13:27:50.351418 master-0 kubenswrapper[26474]: I0223 13:27:50.351285 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f760f819-5bdd-4b3b-9374-7bde76377f34-etc-swift\") pod \"f760f819-5bdd-4b3b-9374-7bde76377f34\" (UID: \"f760f819-5bdd-4b3b-9374-7bde76377f34\") " Feb 23 13:27:50.352589 master-0 kubenswrapper[26474]: I0223 13:27:50.352557 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f760f819-5bdd-4b3b-9374-7bde76377f34-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f760f819-5bdd-4b3b-9374-7bde76377f34" (UID: "f760f819-5bdd-4b3b-9374-7bde76377f34"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:50.353991 master-0 kubenswrapper[26474]: I0223 13:27:50.353940 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f760f819-5bdd-4b3b-9374-7bde76377f34" (UID: "f760f819-5bdd-4b3b-9374-7bde76377f34"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:50.356160 master-0 kubenswrapper[26474]: I0223 13:27:50.356103 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f760f819-5bdd-4b3b-9374-7bde76377f34-kube-api-access-fr4hd" (OuterVolumeSpecName: "kube-api-access-fr4hd") pod "f760f819-5bdd-4b3b-9374-7bde76377f34" (UID: "f760f819-5bdd-4b3b-9374-7bde76377f34"). InnerVolumeSpecName "kube-api-access-fr4hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:50.360287 master-0 kubenswrapper[26474]: I0223 13:27:50.360210 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f760f819-5bdd-4b3b-9374-7bde76377f34" (UID: "f760f819-5bdd-4b3b-9374-7bde76377f34"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:50.389972 master-0 kubenswrapper[26474]: I0223 13:27:50.389907 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f760f819-5bdd-4b3b-9374-7bde76377f34" (UID: "f760f819-5bdd-4b3b-9374-7bde76377f34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:50.391819 master-0 kubenswrapper[26474]: I0223 13:27:50.391739 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-scripts" (OuterVolumeSpecName: "scripts") pod "f760f819-5bdd-4b3b-9374-7bde76377f34" (UID: "f760f819-5bdd-4b3b-9374-7bde76377f34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:50.392909 master-0 kubenswrapper[26474]: I0223 13:27:50.392860 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f760f819-5bdd-4b3b-9374-7bde76377f34" (UID: "f760f819-5bdd-4b3b-9374-7bde76377f34"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:50.454197 master-0 kubenswrapper[26474]: I0223 13:27:50.454059 26474 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f760f819-5bdd-4b3b-9374-7bde76377f34-etc-swift\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:50.454197 master-0 kubenswrapper[26474]: I0223 13:27:50.454135 26474 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-swiftconf\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:50.454197 master-0 kubenswrapper[26474]: I0223 13:27:50.454147 26474 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-dispersionconf\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:50.454197 master-0 kubenswrapper[26474]: I0223 13:27:50.454158 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f760f819-5bdd-4b3b-9374-7bde76377f34-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:50.454197 master-0 kubenswrapper[26474]: I0223 13:27:50.454171 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr4hd\" (UniqueName: \"kubernetes.io/projected/f760f819-5bdd-4b3b-9374-7bde76377f34-kube-api-access-fr4hd\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:50.454197 master-0 kubenswrapper[26474]: I0223 13:27:50.454180 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:50.454197 master-0 kubenswrapper[26474]: I0223 13:27:50.454213 26474 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/f760f819-5bdd-4b3b-9374-7bde76377f34-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Feb 23 13:27:50.663889 master-0 kubenswrapper[26474]: E0223 13:27:50.663808 26474 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf760f819_5bdd_4b3b_9374_7bde76377f34.slice/crio-98055475ad0c97a5dcf1312001f9964241bea8ba9b10cead3e358b0da165a427\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf760f819_5bdd_4b3b_9374_7bde76377f34.slice\": RecentStats: unable to find data in memory cache]" Feb 23 13:27:50.676238 master-0 kubenswrapper[26474]: I0223 13:27:50.676185 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tcp42"] Feb 23 13:27:50.766763 master-0 kubenswrapper[26474]: I0223 13:27:50.762745 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8zvqt" event={"ID":"f760f819-5bdd-4b3b-9374-7bde76377f34","Type":"ContainerDied","Data":"98055475ad0c97a5dcf1312001f9964241bea8ba9b10cead3e358b0da165a427"} Feb 23 13:27:50.766763 master-0 kubenswrapper[26474]: I0223 13:27:50.762798 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98055475ad0c97a5dcf1312001f9964241bea8ba9b10cead3e358b0da165a427" Feb 23 13:27:50.766763 master-0 kubenswrapper[26474]: I0223 13:27:50.762795 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8zvqt" Feb 23 13:27:51.419527 master-0 kubenswrapper[26474]: I0223 13:27:51.419149 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qzxjb" podUID="106bae0e-78dc-455a-bca3-35057d5a145a" containerName="ovn-controller" probeResult="failure" output=< Feb 23 13:27:51.419527 master-0 kubenswrapper[26474]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 23 13:27:51.419527 master-0 kubenswrapper[26474]: > Feb 23 13:27:51.781260 master-0 kubenswrapper[26474]: I0223 13:27:51.780230 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"6aa005804bd43000e012e9112f59b9f1fd73b5d15998054bc8e32bf3dfb275c8"} Feb 23 13:27:51.781260 master-0 kubenswrapper[26474]: I0223 13:27:51.780319 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"89ca932abb74b5fd76703e88d397b5158f9abf8556b673e94416a64727d4a95a"} Feb 23 13:27:51.781260 master-0 kubenswrapper[26474]: I0223 13:27:51.780334 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"1e73d62adab441af396896b107649c7138ebe35a2242a0881665c36e8bbbb837"} Feb 23 13:27:51.781260 master-0 kubenswrapper[26474]: I0223 13:27:51.780361 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"8784fdbb3678c70a57cce248ddf43bfa4619420cbb7638d0077109a5e9a83d72"} Feb 23 13:27:51.782636 master-0 kubenswrapper[26474]: I0223 13:27:51.782572 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tcp42" 
event={"ID":"29a70f49-894f-470f-bbe1-205e6714fe94","Type":"ContainerStarted","Data":"b619730592fa3e03603535d1beb302bb02b068ab9da5771140ea4a437e4f7064"} Feb 23 13:27:52.877195 master-0 kubenswrapper[26474]: I0223 13:27:52.877069 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rlgt9"] Feb 23 13:27:52.886493 master-0 kubenswrapper[26474]: I0223 13:27:52.886441 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rlgt9"] Feb 23 13:27:53.806691 master-0 kubenswrapper[26474]: I0223 13:27:53.806590 26474 generic.go:334] "Generic (PLEG): container finished" podID="08e48693-a2aa-426e-9718-5484046f9a4e" containerID="db122ba569957660a3e28b667ce2bd55ce0e307a2d5715926dec6313a158dbf4" exitCode=0 Feb 23 13:27:53.806938 master-0 kubenswrapper[26474]: I0223 13:27:53.806704 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"08e48693-a2aa-426e-9718-5484046f9a4e","Type":"ContainerDied","Data":"db122ba569957660a3e28b667ce2bd55ce0e307a2d5715926dec6313a158dbf4"} Feb 23 13:27:53.813542 master-0 kubenswrapper[26474]: I0223 13:27:53.813478 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"0813fb3d0fa34955bf4e33f0ddec1546e5337560eda2095949696d458fbf757d"} Feb 23 13:27:53.813640 master-0 kubenswrapper[26474]: I0223 13:27:53.813545 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"c6c7a2882e850b7556731f392dcec4ac85571e85eb64c70f57797c50008739c0"} Feb 23 13:27:53.813640 master-0 kubenswrapper[26474]: I0223 13:27:53.813561 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"b73bd9165dd5c1b4a7d0fcb14c3c8e405dc558b16f06e2cef8593e1432965d71"} Feb 23 13:27:53.813640 master-0 kubenswrapper[26474]: I0223 13:27:53.813574 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"c47f5b6ea3869d09894c856dee19b0816458a94a835f47705e61367ad9354b08"} Feb 23 13:27:54.418931 master-0 kubenswrapper[26474]: I0223 13:27:54.418852 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68992f03-040b-4bfd-88bf-38622d28c78c" path="/var/lib/kubelet/pods/68992f03-040b-4bfd-88bf-38622d28c78c/volumes" Feb 23 13:27:54.831114 master-0 kubenswrapper[26474]: I0223 13:27:54.831039 26474 generic.go:334] "Generic (PLEG): container finished" podID="9502e2b0-2a39-47b6-b482-f13048ccdf41" containerID="0cc1a2ffd006a8b964b71aa1ac5fa220e9afc8cf5659338483b95b0a2af6ee35" exitCode=0 Feb 23 13:27:54.831543 master-0 kubenswrapper[26474]: I0223 13:27:54.831136 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9502e2b0-2a39-47b6-b482-f13048ccdf41","Type":"ContainerDied","Data":"0cc1a2ffd006a8b964b71aa1ac5fa220e9afc8cf5659338483b95b0a2af6ee35"} Feb 23 13:27:54.836443 master-0 kubenswrapper[26474]: I0223 13:27:54.836368 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"08e48693-a2aa-426e-9718-5484046f9a4e","Type":"ContainerStarted","Data":"7e98699a388ec4cf44f168633ccedaa4e91dc9ecce57c3df600a6eb7ef0d4978"} Feb 23 13:27:54.836722 master-0 kubenswrapper[26474]: I0223 13:27:54.836679 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 13:27:54.911544 master-0 kubenswrapper[26474]: I0223 13:27:54.911444 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=52.527464801 podStartE2EDuration="1m0.911422591s" podCreationTimestamp="2026-02-23 13:26:54 +0000 UTC" firstStartedPulling="2026-02-23 13:27:11.605401816 +0000 UTC m=+753.451909493" lastFinishedPulling="2026-02-23 13:27:19.989359606 +0000 UTC m=+761.835867283" observedRunningTime="2026-02-23 13:27:54.904907453 +0000 UTC m=+796.751415130" watchObservedRunningTime="2026-02-23 13:27:54.911422591 +0000 UTC m=+796.757930258" Feb 23 13:27:55.853652 master-0 kubenswrapper[26474]: I0223 13:27:55.853571 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9502e2b0-2a39-47b6-b482-f13048ccdf41","Type":"ContainerStarted","Data":"ba1cfa241e3124adec2c47839723dd7aef373949e99ea308adcbb26ed7360d93"} Feb 23 13:27:55.855078 master-0 kubenswrapper[26474]: I0223 13:27:55.855027 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:27:55.867269 master-0 kubenswrapper[26474]: I0223 13:27:55.865893 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"5da8f92e7ac6c6ef967ba12df7d97123c48ba3b7531424f789875320d3ff3a86"} Feb 23 13:27:55.867461 master-0 kubenswrapper[26474]: I0223 13:27:55.867288 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"8b79fef0413b5f8e77bbbe949c78d0cd4fca8b3d0a73b30976c90ae645600c64"} Feb 23 13:27:55.948439 master-0 kubenswrapper[26474]: I0223 13:27:55.948246 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.620301884 podStartE2EDuration="1m1.948212423s" podCreationTimestamp="2026-02-23 13:26:54 +0000 UTC" firstStartedPulling="2026-02-23 13:27:11.82513503 +0000 UTC 
m=+753.671642707" lastFinishedPulling="2026-02-23 13:27:20.153045569 +0000 UTC m=+761.999553246" observedRunningTime="2026-02-23 13:27:55.943675563 +0000 UTC m=+797.790183310" watchObservedRunningTime="2026-02-23 13:27:55.948212423 +0000 UTC m=+797.794720100" Feb 23 13:27:56.382888 master-0 kubenswrapper[26474]: I0223 13:27:56.382809 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:56.383306 master-0 kubenswrapper[26474]: I0223 13:27:56.383000 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qzxjb" podUID="106bae0e-78dc-455a-bca3-35057d5a145a" containerName="ovn-controller" probeResult="failure" output=< Feb 23 13:27:56.383306 master-0 kubenswrapper[26474]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 23 13:27:56.383306 master-0 kubenswrapper[26474]: > Feb 23 13:27:56.407971 master-0 kubenswrapper[26474]: I0223 13:27:56.407931 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hlg7s" Feb 23 13:27:56.629148 master-0 kubenswrapper[26474]: I0223 13:27:56.627796 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qzxjb-config-2l5vs"] Feb 23 13:27:56.629846 master-0 kubenswrapper[26474]: E0223 13:27:56.629676 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f760f819-5bdd-4b3b-9374-7bde76377f34" containerName="swift-ring-rebalance" Feb 23 13:27:56.629846 master-0 kubenswrapper[26474]: I0223 13:27:56.629697 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="f760f819-5bdd-4b3b-9374-7bde76377f34" containerName="swift-ring-rebalance" Feb 23 13:27:56.629993 master-0 kubenswrapper[26474]: I0223 13:27:56.629908 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="f760f819-5bdd-4b3b-9374-7bde76377f34" containerName="swift-ring-rebalance" Feb 23 13:27:56.630670 master-0 
kubenswrapper[26474]: I0223 13:27:56.630648 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.639098 master-0 kubenswrapper[26474]: I0223 13:27:56.637905 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 13:27:56.645797 master-0 kubenswrapper[26474]: I0223 13:27:56.645676 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzxjb-config-2l5vs"] Feb 23 13:27:56.726523 master-0 kubenswrapper[26474]: I0223 13:27:56.726426 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-log-ovn\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.727063 master-0 kubenswrapper[26474]: I0223 13:27:56.727035 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run-ovn\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.727247 master-0 kubenswrapper[26474]: I0223 13:27:56.727227 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-additional-scripts\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.727438 master-0 kubenswrapper[26474]: I0223 13:27:56.727416 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-gzj46\" (UniqueName: \"kubernetes.io/projected/da25284d-2aa2-4fbb-93e8-011c6194ab72-kube-api-access-gzj46\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.727602 master-0 kubenswrapper[26474]: I0223 13:27:56.727581 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.727745 master-0 kubenswrapper[26474]: I0223 13:27:56.727723 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-scripts\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.829611 master-0 kubenswrapper[26474]: I0223 13:27:56.829546 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run-ovn\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.829834 master-0 kubenswrapper[26474]: I0223 13:27:56.829627 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-additional-scripts\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.829834 master-0 
kubenswrapper[26474]: I0223 13:27:56.829691 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzj46\" (UniqueName: \"kubernetes.io/projected/da25284d-2aa2-4fbb-93e8-011c6194ab72-kube-api-access-gzj46\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.829834 master-0 kubenswrapper[26474]: I0223 13:27:56.829754 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.829834 master-0 kubenswrapper[26474]: I0223 13:27:56.829797 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-scripts\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.830369 master-0 kubenswrapper[26474]: I0223 13:27:56.830204 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run-ovn\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.830369 master-0 kubenswrapper[26474]: I0223 13:27:56.830240 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-log-ovn\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" 
Feb 23 13:27:56.830526 master-0 kubenswrapper[26474]: I0223 13:27:56.830395 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-additional-scripts\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.830526 master-0 kubenswrapper[26474]: I0223 13:27:56.830400 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-log-ovn\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.830661 master-0 kubenswrapper[26474]: I0223 13:27:56.830480 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.837729 master-0 kubenswrapper[26474]: I0223 13:27:56.837627 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-scripts\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.852614 master-0 kubenswrapper[26474]: I0223 13:27:56.846528 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzj46\" (UniqueName: \"kubernetes.io/projected/da25284d-2aa2-4fbb-93e8-011c6194ab72-kube-api-access-gzj46\") pod \"ovn-controller-qzxjb-config-2l5vs\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " 
pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:56.981584 master-0 kubenswrapper[26474]: I0223 13:27:56.981436 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:27:57.941489 master-0 kubenswrapper[26474]: I0223 13:27:57.941408 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ljd8h"] Feb 23 13:27:57.943984 master-0 kubenswrapper[26474]: I0223 13:27:57.943927 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ljd8h" Feb 23 13:27:57.946811 master-0 kubenswrapper[26474]: I0223 13:27:57.946523 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 13:27:57.955970 master-0 kubenswrapper[26474]: I0223 13:27:57.955880 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q26mx\" (UniqueName: \"kubernetes.io/projected/1d3b6be3-59de-456f-bf7e-2c40371e217e-kube-api-access-q26mx\") pod \"root-account-create-update-ljd8h\" (UID: \"1d3b6be3-59de-456f-bf7e-2c40371e217e\") " pod="openstack/root-account-create-update-ljd8h" Feb 23 13:27:57.956157 master-0 kubenswrapper[26474]: I0223 13:27:57.955983 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3b6be3-59de-456f-bf7e-2c40371e217e-operator-scripts\") pod \"root-account-create-update-ljd8h\" (UID: \"1d3b6be3-59de-456f-bf7e-2c40371e217e\") " pod="openstack/root-account-create-update-ljd8h" Feb 23 13:27:57.959362 master-0 kubenswrapper[26474]: I0223 13:27:57.959293 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ljd8h"] Feb 23 13:27:58.059204 master-0 kubenswrapper[26474]: I0223 13:27:58.058966 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q26mx\" (UniqueName: \"kubernetes.io/projected/1d3b6be3-59de-456f-bf7e-2c40371e217e-kube-api-access-q26mx\") pod \"root-account-create-update-ljd8h\" (UID: \"1d3b6be3-59de-456f-bf7e-2c40371e217e\") " pod="openstack/root-account-create-update-ljd8h" Feb 23 13:27:58.059204 master-0 kubenswrapper[26474]: I0223 13:27:58.059069 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3b6be3-59de-456f-bf7e-2c40371e217e-operator-scripts\") pod \"root-account-create-update-ljd8h\" (UID: \"1d3b6be3-59de-456f-bf7e-2c40371e217e\") " pod="openstack/root-account-create-update-ljd8h" Feb 23 13:27:58.060053 master-0 kubenswrapper[26474]: I0223 13:27:58.060004 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3b6be3-59de-456f-bf7e-2c40371e217e-operator-scripts\") pod \"root-account-create-update-ljd8h\" (UID: \"1d3b6be3-59de-456f-bf7e-2c40371e217e\") " pod="openstack/root-account-create-update-ljd8h" Feb 23 13:27:58.083600 master-0 kubenswrapper[26474]: I0223 13:27:58.074031 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q26mx\" (UniqueName: \"kubernetes.io/projected/1d3b6be3-59de-456f-bf7e-2c40371e217e-kube-api-access-q26mx\") pod \"root-account-create-update-ljd8h\" (UID: \"1d3b6be3-59de-456f-bf7e-2c40371e217e\") " pod="openstack/root-account-create-update-ljd8h" Feb 23 13:27:58.279533 master-0 kubenswrapper[26474]: I0223 13:27:58.279469 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ljd8h" Feb 23 13:28:01.373258 master-0 kubenswrapper[26474]: I0223 13:28:01.373154 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qzxjb" podUID="106bae0e-78dc-455a-bca3-35057d5a145a" containerName="ovn-controller" probeResult="failure" output=< Feb 23 13:28:01.373258 master-0 kubenswrapper[26474]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 23 13:28:01.373258 master-0 kubenswrapper[26474]: > Feb 23 13:28:03.433070 master-0 kubenswrapper[26474]: I0223 13:28:03.432979 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzxjb-config-2l5vs"] Feb 23 13:28:03.486723 master-0 kubenswrapper[26474]: I0223 13:28:03.486657 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ljd8h"] Feb 23 13:28:03.975004 master-0 kubenswrapper[26474]: I0223 13:28:03.974937 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"54f698e98544c3af519f3620f985c463ea8a883a70cab61367343efae18edc98"} Feb 23 13:28:03.975004 master-0 kubenswrapper[26474]: I0223 13:28:03.974992 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"6db78e3ec863f35d2acce5c979d06b14a4be8b9a9251a36e9bf38ad9b97a4c16"} Feb 23 13:28:03.975004 master-0 kubenswrapper[26474]: I0223 13:28:03.975003 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"20d424974edd5ce6519c10fdd2f3ffe3f82c334eb8dd311ef3f7951a0ef7c0dd"} Feb 23 13:28:03.975004 master-0 kubenswrapper[26474]: I0223 13:28:03.975012 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"93aea02db65d17d84fbfff3092df5797c6774d2bcf3133ada77e6ede14ff0b92"} Feb 23 13:28:03.976700 master-0 kubenswrapper[26474]: I0223 13:28:03.976666 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tcp42" event={"ID":"29a70f49-894f-470f-bbe1-205e6714fe94","Type":"ContainerStarted","Data":"00e11b3dd36a1cdcaafd663ddab50cbcf45f412decf1170269872b9029d94ec9"} Feb 23 13:28:03.979369 master-0 kubenswrapper[26474]: I0223 13:28:03.979316 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ljd8h" event={"ID":"1d3b6be3-59de-456f-bf7e-2c40371e217e","Type":"ContainerStarted","Data":"1dcf37b03959432d7338c0a7fc55c7c5de968efd38b3e700fbd0c1379eac59bd"} Feb 23 13:28:03.979460 master-0 kubenswrapper[26474]: I0223 13:28:03.979398 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ljd8h" event={"ID":"1d3b6be3-59de-456f-bf7e-2c40371e217e","Type":"ContainerStarted","Data":"d65ec5c42db9fc4b7286828bd02e931feb2f6d0a8c501bad9957d4e657f7f86d"} Feb 23 13:28:03.981851 master-0 kubenswrapper[26474]: I0223 13:28:03.981794 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb-config-2l5vs" event={"ID":"da25284d-2aa2-4fbb-93e8-011c6194ab72","Type":"ContainerStarted","Data":"77e045467cf762d3952cffb764755f74a6a6488fa78d84b6a9f31be34a7c7ba8"} Feb 23 13:28:03.981916 master-0 kubenswrapper[26474]: I0223 13:28:03.981851 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb-config-2l5vs" event={"ID":"da25284d-2aa2-4fbb-93e8-011c6194ab72","Type":"ContainerStarted","Data":"70dfd6ec1ddc67137b26000179f47dd9d78e409a6619b5cc265bfa02411108ef"} Feb 23 13:28:04.016215 master-0 kubenswrapper[26474]: I0223 13:28:04.015728 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-db-sync-tcp42" podStartSLOduration=2.880517375 podStartE2EDuration="15.015661329s" podCreationTimestamp="2026-02-23 13:27:49 +0000 UTC" firstStartedPulling="2026-02-23 13:27:50.7612206 +0000 UTC m=+792.607728277" lastFinishedPulling="2026-02-23 13:28:02.896364544 +0000 UTC m=+804.742872231" observedRunningTime="2026-02-23 13:28:04.003316329 +0000 UTC m=+805.849824036" watchObservedRunningTime="2026-02-23 13:28:04.015661329 +0000 UTC m=+805.862169026" Feb 23 13:28:04.040749 master-0 kubenswrapper[26474]: I0223 13:28:04.040544 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ljd8h" podStartSLOduration=7.040515483 podStartE2EDuration="7.040515483s" podCreationTimestamp="2026-02-23 13:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:04.024267708 +0000 UTC m=+805.870775425" watchObservedRunningTime="2026-02-23 13:28:04.040515483 +0000 UTC m=+805.887023180" Feb 23 13:28:04.073989 master-0 kubenswrapper[26474]: I0223 13:28:04.073893 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qzxjb-config-2l5vs" podStartSLOduration=8.073860772 podStartE2EDuration="8.073860772s" podCreationTimestamp="2026-02-23 13:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:04.0647018 +0000 UTC m=+805.911209507" watchObservedRunningTime="2026-02-23 13:28:04.073860772 +0000 UTC m=+805.920368439" Feb 23 13:28:05.007722 master-0 kubenswrapper[26474]: I0223 13:28:05.007109 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7172c21a-db8e-428a-9a0c-5ef060abafd3","Type":"ContainerStarted","Data":"55d4d36d998e69e5658a34035918f45532b52d7205a6c869f4293aaa013da33b"} Feb 23 13:28:05.010759 master-0 
kubenswrapper[26474]: I0223 13:28:05.010727 26474 generic.go:334] "Generic (PLEG): container finished" podID="1d3b6be3-59de-456f-bf7e-2c40371e217e" containerID="1dcf37b03959432d7338c0a7fc55c7c5de968efd38b3e700fbd0c1379eac59bd" exitCode=0 Feb 23 13:28:05.010898 master-0 kubenswrapper[26474]: I0223 13:28:05.010785 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ljd8h" event={"ID":"1d3b6be3-59de-456f-bf7e-2c40371e217e","Type":"ContainerDied","Data":"1dcf37b03959432d7338c0a7fc55c7c5de968efd38b3e700fbd0c1379eac59bd"} Feb 23 13:28:05.019738 master-0 kubenswrapper[26474]: I0223 13:28:05.019589 26474 generic.go:334] "Generic (PLEG): container finished" podID="da25284d-2aa2-4fbb-93e8-011c6194ab72" containerID="77e045467cf762d3952cffb764755f74a6a6488fa78d84b6a9f31be34a7c7ba8" exitCode=0 Feb 23 13:28:05.020610 master-0 kubenswrapper[26474]: I0223 13:28:05.020534 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb-config-2l5vs" event={"ID":"da25284d-2aa2-4fbb-93e8-011c6194ab72","Type":"ContainerDied","Data":"77e045467cf762d3952cffb764755f74a6a6488fa78d84b6a9f31be34a7c7ba8"} Feb 23 13:28:05.078183 master-0 kubenswrapper[26474]: I0223 13:28:05.077992 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=28.436494173 podStartE2EDuration="34.07754536s" podCreationTimestamp="2026-02-23 13:27:31 +0000 UTC" firstStartedPulling="2026-02-23 13:27:49.673063921 +0000 UTC m=+791.519571598" lastFinishedPulling="2026-02-23 13:27:55.314115118 +0000 UTC m=+797.160622785" observedRunningTime="2026-02-23 13:28:05.056003757 +0000 UTC m=+806.902511544" watchObservedRunningTime="2026-02-23 13:28:05.07754536 +0000 UTC m=+806.924053047" Feb 23 13:28:05.471670 master-0 kubenswrapper[26474]: I0223 13:28:05.471397 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d4b6db685-b664p"] Feb 23 13:28:05.473668 master-0 
kubenswrapper[26474]: I0223 13:28:05.473636 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.476848 master-0 kubenswrapper[26474]: I0223 13:28:05.476809 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 23 13:28:05.485964 master-0 kubenswrapper[26474]: I0223 13:28:05.485902 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4b6db685-b664p"] Feb 23 13:28:05.564317 master-0 kubenswrapper[26474]: I0223 13:28:05.564239 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.564712 master-0 kubenswrapper[26474]: I0223 13:28:05.564524 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.564712 master-0 kubenswrapper[26474]: I0223 13:28:05.564563 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-svc\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.564788 master-0 kubenswrapper[26474]: I0223 13:28:05.564754 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.564788 master-0 kubenswrapper[26474]: I0223 13:28:05.564783 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rb4l\" (UniqueName: \"kubernetes.io/projected/2ec9002a-dd92-48b5-92b8-af64bf2871a5-kube-api-access-2rb4l\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.564850 master-0 kubenswrapper[26474]: I0223 13:28:05.564811 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-config\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.667911 master-0 kubenswrapper[26474]: I0223 13:28:05.667783 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.667911 master-0 kubenswrapper[26474]: I0223 13:28:05.667868 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rb4l\" (UniqueName: \"kubernetes.io/projected/2ec9002a-dd92-48b5-92b8-af64bf2871a5-kube-api-access-2rb4l\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.667911 master-0 kubenswrapper[26474]: I0223 13:28:05.667894 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-config\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.668486 master-0 kubenswrapper[26474]: I0223 13:28:05.668016 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.668486 master-0 kubenswrapper[26474]: I0223 13:28:05.668062 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.668486 master-0 kubenswrapper[26474]: I0223 13:28:05.668085 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-svc\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.668985 master-0 kubenswrapper[26474]: I0223 13:28:05.668924 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-config\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.668985 master-0 kubenswrapper[26474]: I0223 13:28:05.668920 26474 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.669129 master-0 kubenswrapper[26474]: I0223 13:28:05.669026 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.669296 master-0 kubenswrapper[26474]: I0223 13:28:05.669248 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-svc\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.669675 master-0 kubenswrapper[26474]: I0223 13:28:05.669626 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:05.803539 master-0 kubenswrapper[26474]: I0223 13:28:05.803406 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rb4l\" (UniqueName: \"kubernetes.io/projected/2ec9002a-dd92-48b5-92b8-af64bf2871a5-kube-api-access-2rb4l\") pod \"dnsmasq-dns-7d4b6db685-b664p\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") " pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:06.103431 master-0 kubenswrapper[26474]: I0223 13:28:06.102869 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:06.373571 master-0 kubenswrapper[26474]: I0223 13:28:06.373447 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qzxjb" Feb 23 13:28:06.628931 master-0 kubenswrapper[26474]: I0223 13:28:06.628818 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:28:06.706235 master-0 kubenswrapper[26474]: I0223 13:28:06.706160 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-log-ovn\") pod \"da25284d-2aa2-4fbb-93e8-011c6194ab72\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " Feb 23 13:28:06.706235 master-0 kubenswrapper[26474]: I0223 13:28:06.706237 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-scripts\") pod \"da25284d-2aa2-4fbb-93e8-011c6194ab72\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " Feb 23 13:28:06.706563 master-0 kubenswrapper[26474]: I0223 13:28:06.706268 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run\") pod \"da25284d-2aa2-4fbb-93e8-011c6194ab72\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " Feb 23 13:28:06.706563 master-0 kubenswrapper[26474]: I0223 13:28:06.706313 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-additional-scripts\") pod \"da25284d-2aa2-4fbb-93e8-011c6194ab72\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " Feb 23 13:28:06.706563 master-0 kubenswrapper[26474]: I0223 13:28:06.706402 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run-ovn\") pod \"da25284d-2aa2-4fbb-93e8-011c6194ab72\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " Feb 23 13:28:06.706563 master-0 kubenswrapper[26474]: I0223 13:28:06.706471 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzj46\" (UniqueName: \"kubernetes.io/projected/da25284d-2aa2-4fbb-93e8-011c6194ab72-kube-api-access-gzj46\") pod \"da25284d-2aa2-4fbb-93e8-011c6194ab72\" (UID: \"da25284d-2aa2-4fbb-93e8-011c6194ab72\") " Feb 23 13:28:06.708724 master-0 kubenswrapper[26474]: I0223 13:28:06.708654 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run" (OuterVolumeSpecName: "var-run") pod "da25284d-2aa2-4fbb-93e8-011c6194ab72" (UID: "da25284d-2aa2-4fbb-93e8-011c6194ab72"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:28:06.708724 master-0 kubenswrapper[26474]: I0223 13:28:06.708671 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "da25284d-2aa2-4fbb-93e8-011c6194ab72" (UID: "da25284d-2aa2-4fbb-93e8-011c6194ab72"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:28:06.708851 master-0 kubenswrapper[26474]: I0223 13:28:06.708727 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "da25284d-2aa2-4fbb-93e8-011c6194ab72" (UID: "da25284d-2aa2-4fbb-93e8-011c6194ab72"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:28:06.714772 master-0 kubenswrapper[26474]: I0223 13:28:06.714705 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ljd8h" Feb 23 13:28:06.718524 master-0 kubenswrapper[26474]: I0223 13:28:06.709182 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "da25284d-2aa2-4fbb-93e8-011c6194ab72" (UID: "da25284d-2aa2-4fbb-93e8-011c6194ab72"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:06.718632 master-0 kubenswrapper[26474]: I0223 13:28:06.709557 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-scripts" (OuterVolumeSpecName: "scripts") pod "da25284d-2aa2-4fbb-93e8-011c6194ab72" (UID: "da25284d-2aa2-4fbb-93e8-011c6194ab72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:06.718632 master-0 kubenswrapper[26474]: I0223 13:28:06.714848 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da25284d-2aa2-4fbb-93e8-011c6194ab72-kube-api-access-gzj46" (OuterVolumeSpecName: "kube-api-access-gzj46") pod "da25284d-2aa2-4fbb-93e8-011c6194ab72" (UID: "da25284d-2aa2-4fbb-93e8-011c6194ab72"). InnerVolumeSpecName "kube-api-access-gzj46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:06.796238 master-0 kubenswrapper[26474]: W0223 13:28:06.796160 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec9002a_dd92_48b5_92b8_af64bf2871a5.slice/crio-5a6e5867f7510b8195e108dc851e432e7b1d14cfc01691cb4be12390ac7d281c WatchSource:0}: Error finding container 5a6e5867f7510b8195e108dc851e432e7b1d14cfc01691cb4be12390ac7d281c: Status 404 returned error can't find the container with id 5a6e5867f7510b8195e108dc851e432e7b1d14cfc01691cb4be12390ac7d281c Feb 23 13:28:06.805743 master-0 kubenswrapper[26474]: I0223 13:28:06.805658 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4b6db685-b664p"] Feb 23 13:28:06.808861 master-0 kubenswrapper[26474]: I0223 13:28:06.808792 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q26mx\" (UniqueName: \"kubernetes.io/projected/1d3b6be3-59de-456f-bf7e-2c40371e217e-kube-api-access-q26mx\") pod \"1d3b6be3-59de-456f-bf7e-2c40371e217e\" (UID: \"1d3b6be3-59de-456f-bf7e-2c40371e217e\") " Feb 23 13:28:06.808986 master-0 kubenswrapper[26474]: I0223 13:28:06.808909 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3b6be3-59de-456f-bf7e-2c40371e217e-operator-scripts\") pod \"1d3b6be3-59de-456f-bf7e-2c40371e217e\" (UID: \"1d3b6be3-59de-456f-bf7e-2c40371e217e\") " Feb 23 13:28:06.810033 master-0 kubenswrapper[26474]: I0223 13:28:06.809874 26474 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:06.810033 master-0 kubenswrapper[26474]: I0223 13:28:06.809905 26474 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:06.810033 master-0 kubenswrapper[26474]: I0223 13:28:06.809917 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzj46\" (UniqueName: \"kubernetes.io/projected/da25284d-2aa2-4fbb-93e8-011c6194ab72-kube-api-access-gzj46\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:06.810033 master-0 kubenswrapper[26474]: I0223 13:28:06.809929 26474 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:06.810033 master-0 kubenswrapper[26474]: I0223 13:28:06.809941 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da25284d-2aa2-4fbb-93e8-011c6194ab72-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:06.810033 master-0 kubenswrapper[26474]: I0223 13:28:06.809953 26474 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da25284d-2aa2-4fbb-93e8-011c6194ab72-var-run\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:06.810579 master-0 kubenswrapper[26474]: I0223 13:28:06.810537 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3b6be3-59de-456f-bf7e-2c40371e217e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d3b6be3-59de-456f-bf7e-2c40371e217e" (UID: "1d3b6be3-59de-456f-bf7e-2c40371e217e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:06.814156 master-0 kubenswrapper[26474]: I0223 13:28:06.814110 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3b6be3-59de-456f-bf7e-2c40371e217e-kube-api-access-q26mx" (OuterVolumeSpecName: "kube-api-access-q26mx") pod "1d3b6be3-59de-456f-bf7e-2c40371e217e" (UID: "1d3b6be3-59de-456f-bf7e-2c40371e217e"). InnerVolumeSpecName "kube-api-access-q26mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:06.912095 master-0 kubenswrapper[26474]: I0223 13:28:06.912017 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q26mx\" (UniqueName: \"kubernetes.io/projected/1d3b6be3-59de-456f-bf7e-2c40371e217e-kube-api-access-q26mx\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:06.912095 master-0 kubenswrapper[26474]: I0223 13:28:06.912080 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3b6be3-59de-456f-bf7e-2c40371e217e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:07.046320 master-0 kubenswrapper[26474]: I0223 13:28:07.046245 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" event={"ID":"2ec9002a-dd92-48b5-92b8-af64bf2871a5","Type":"ContainerStarted","Data":"66fc1f7ca9f0029c171b3f83e7a41bcdb6ea0766df7fc5d07cb2ff6a40c4fb70"} Feb 23 13:28:07.046320 master-0 kubenswrapper[26474]: I0223 13:28:07.046302 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" event={"ID":"2ec9002a-dd92-48b5-92b8-af64bf2871a5","Type":"ContainerStarted","Data":"5a6e5867f7510b8195e108dc851e432e7b1d14cfc01691cb4be12390ac7d281c"} Feb 23 13:28:07.051624 master-0 kubenswrapper[26474]: I0223 13:28:07.051591 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ljd8h" 
event={"ID":"1d3b6be3-59de-456f-bf7e-2c40371e217e","Type":"ContainerDied","Data":"d65ec5c42db9fc4b7286828bd02e931feb2f6d0a8c501bad9957d4e657f7f86d"} Feb 23 13:28:07.051624 master-0 kubenswrapper[26474]: I0223 13:28:07.051623 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d65ec5c42db9fc4b7286828bd02e931feb2f6d0a8c501bad9957d4e657f7f86d" Feb 23 13:28:07.051784 master-0 kubenswrapper[26474]: I0223 13:28:07.051650 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ljd8h" Feb 23 13:28:07.056732 master-0 kubenswrapper[26474]: I0223 13:28:07.056661 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb-config-2l5vs" event={"ID":"da25284d-2aa2-4fbb-93e8-011c6194ab72","Type":"ContainerDied","Data":"70dfd6ec1ddc67137b26000179f47dd9d78e409a6619b5cc265bfa02411108ef"} Feb 23 13:28:07.056732 master-0 kubenswrapper[26474]: I0223 13:28:07.056704 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70dfd6ec1ddc67137b26000179f47dd9d78e409a6619b5cc265bfa02411108ef" Feb 23 13:28:07.056732 master-0 kubenswrapper[26474]: I0223 13:28:07.056712 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qzxjb-config-2l5vs" Feb 23 13:28:07.789458 master-0 kubenswrapper[26474]: I0223 13:28:07.789387 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qzxjb-config-2l5vs"] Feb 23 13:28:07.802355 master-0 kubenswrapper[26474]: I0223 13:28:07.802284 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qzxjb-config-2l5vs"] Feb 23 13:28:07.862487 master-0 kubenswrapper[26474]: I0223 13:28:07.862383 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qzxjb-config-52fcc"] Feb 23 13:28:07.862988 master-0 kubenswrapper[26474]: E0223 13:28:07.862955 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da25284d-2aa2-4fbb-93e8-011c6194ab72" containerName="ovn-config" Feb 23 13:28:07.862988 master-0 kubenswrapper[26474]: I0223 13:28:07.862981 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="da25284d-2aa2-4fbb-93e8-011c6194ab72" containerName="ovn-config" Feb 23 13:28:07.863080 master-0 kubenswrapper[26474]: E0223 13:28:07.863023 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3b6be3-59de-456f-bf7e-2c40371e217e" containerName="mariadb-account-create-update" Feb 23 13:28:07.863080 master-0 kubenswrapper[26474]: I0223 13:28:07.863031 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3b6be3-59de-456f-bf7e-2c40371e217e" containerName="mariadb-account-create-update" Feb 23 13:28:07.863719 master-0 kubenswrapper[26474]: I0223 13:28:07.863693 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="da25284d-2aa2-4fbb-93e8-011c6194ab72" containerName="ovn-config" Feb 23 13:28:07.863776 master-0 kubenswrapper[26474]: I0223 13:28:07.863735 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3b6be3-59de-456f-bf7e-2c40371e217e" containerName="mariadb-account-create-update" Feb 23 13:28:07.864900 master-0 kubenswrapper[26474]: I0223 
13:28:07.864862 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:07.875308 master-0 kubenswrapper[26474]: I0223 13:28:07.875240 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 13:28:07.902876 master-0 kubenswrapper[26474]: I0223 13:28:07.882088 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzxjb-config-52fcc"] Feb 23 13:28:08.046501 master-0 kubenswrapper[26474]: I0223 13:28:08.046354 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7q7x\" (UniqueName: \"kubernetes.io/projected/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-kube-api-access-j7q7x\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.046501 master-0 kubenswrapper[26474]: I0223 13:28:08.046431 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-additional-scripts\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.046501 master-0 kubenswrapper[26474]: I0223 13:28:08.046488 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-log-ovn\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.046912 master-0 kubenswrapper[26474]: I0223 13:28:08.046656 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-scripts\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.046912 master-0 kubenswrapper[26474]: I0223 13:28:08.046773 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.047071 master-0 kubenswrapper[26474]: I0223 13:28:08.047039 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run-ovn\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.069258 master-0 kubenswrapper[26474]: I0223 13:28:08.069187 26474 generic.go:334] "Generic (PLEG): container finished" podID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerID="66fc1f7ca9f0029c171b3f83e7a41bcdb6ea0766df7fc5d07cb2ff6a40c4fb70" exitCode=0 Feb 23 13:28:08.069258 master-0 kubenswrapper[26474]: I0223 13:28:08.069227 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" event={"ID":"2ec9002a-dd92-48b5-92b8-af64bf2871a5","Type":"ContainerDied","Data":"66fc1f7ca9f0029c171b3f83e7a41bcdb6ea0766df7fc5d07cb2ff6a40c4fb70"} Feb 23 13:28:08.149237 master-0 kubenswrapper[26474]: I0223 13:28:08.149165 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run-ovn\") pod \"ovn-controller-qzxjb-config-52fcc\" 
(UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.149571 master-0 kubenswrapper[26474]: I0223 13:28:08.149310 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7q7x\" (UniqueName: \"kubernetes.io/projected/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-kube-api-access-j7q7x\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.149571 master-0 kubenswrapper[26474]: I0223 13:28:08.149362 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-additional-scripts\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.149571 master-0 kubenswrapper[26474]: I0223 13:28:08.149371 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run-ovn\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.149571 master-0 kubenswrapper[26474]: I0223 13:28:08.149398 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-log-ovn\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.149571 master-0 kubenswrapper[26474]: I0223 13:28:08.149494 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-log-ovn\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.149776 master-0 kubenswrapper[26474]: I0223 13:28:08.149646 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-scripts\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.149776 master-0 kubenswrapper[26474]: I0223 13:28:08.149708 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.150141 master-0 kubenswrapper[26474]: I0223 13:28:08.150044 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.150222 master-0 kubenswrapper[26474]: I0223 13:28:08.150201 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-additional-scripts\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.152590 master-0 kubenswrapper[26474]: I0223 13:28:08.152546 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-scripts\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.183050 master-0 kubenswrapper[26474]: I0223 13:28:08.182955 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7q7x\" (UniqueName: \"kubernetes.io/projected/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-kube-api-access-j7q7x\") pod \"ovn-controller-qzxjb-config-52fcc\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.229047 master-0 kubenswrapper[26474]: I0223 13:28:08.228962 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:08.441669 master-0 kubenswrapper[26474]: I0223 13:28:08.441447 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da25284d-2aa2-4fbb-93e8-011c6194ab72" path="/var/lib/kubelet/pods/da25284d-2aa2-4fbb-93e8-011c6194ab72/volumes" Feb 23 13:28:08.769185 master-0 kubenswrapper[26474]: W0223 13:28:08.769100 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8babfbb4_1e3a_4a6a_ae17_2150bacdf981.slice/crio-3b3d2fde8a0dde229d3f2f5eae077bd211567d27c9466b7507f8368d077cfdbf WatchSource:0}: Error finding container 3b3d2fde8a0dde229d3f2f5eae077bd211567d27c9466b7507f8368d077cfdbf: Status 404 returned error can't find the container with id 3b3d2fde8a0dde229d3f2f5eae077bd211567d27c9466b7507f8368d077cfdbf Feb 23 13:28:08.769528 master-0 kubenswrapper[26474]: I0223 13:28:08.769381 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzxjb-config-52fcc"] Feb 23 13:28:09.086729 master-0 kubenswrapper[26474]: I0223 13:28:09.083617 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d4b6db685-b664p" event={"ID":"2ec9002a-dd92-48b5-92b8-af64bf2871a5","Type":"ContainerStarted","Data":"f52ac843cd37fd6a277015924e303f909ddb46463aefbdea39afc30c2e6f99bd"} Feb 23 13:28:09.086729 master-0 kubenswrapper[26474]: I0223 13:28:09.084123 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" Feb 23 13:28:09.091253 master-0 kubenswrapper[26474]: I0223 13:28:09.091194 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb-config-52fcc" event={"ID":"8babfbb4-1e3a-4a6a-ae17-2150bacdf981","Type":"ContainerStarted","Data":"d3cf12cf18cf168968de06a70ee50280cba60cef96bfb312ac9f806338aaad18"} Feb 23 13:28:09.091253 master-0 kubenswrapper[26474]: I0223 13:28:09.091246 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb-config-52fcc" event={"ID":"8babfbb4-1e3a-4a6a-ae17-2150bacdf981","Type":"ContainerStarted","Data":"3b3d2fde8a0dde229d3f2f5eae077bd211567d27c9466b7507f8368d077cfdbf"} Feb 23 13:28:09.110375 master-0 kubenswrapper[26474]: I0223 13:28:09.110262 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" podStartSLOduration=4.1102457470000004 podStartE2EDuration="4.110245747s" podCreationTimestamp="2026-02-23 13:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:09.102503229 +0000 UTC m=+810.949010896" watchObservedRunningTime="2026-02-23 13:28:09.110245747 +0000 UTC m=+810.956753414" Feb 23 13:28:09.138164 master-0 kubenswrapper[26474]: I0223 13:28:09.138051 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qzxjb-config-52fcc" podStartSLOduration=2.138032501 podStartE2EDuration="2.138032501s" podCreationTimestamp="2026-02-23 13:28:07 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:09.123658743 +0000 UTC m=+810.970166420" watchObservedRunningTime="2026-02-23 13:28:09.138032501 +0000 UTC m=+810.984540178" Feb 23 13:28:10.102404 master-0 kubenswrapper[26474]: I0223 13:28:10.102304 26474 generic.go:334] "Generic (PLEG): container finished" podID="8babfbb4-1e3a-4a6a-ae17-2150bacdf981" containerID="d3cf12cf18cf168968de06a70ee50280cba60cef96bfb312ac9f806338aaad18" exitCode=0 Feb 23 13:28:10.103095 master-0 kubenswrapper[26474]: I0223 13:28:10.102379 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzxjb-config-52fcc" event={"ID":"8babfbb4-1e3a-4a6a-ae17-2150bacdf981","Type":"ContainerDied","Data":"d3cf12cf18cf168968de06a70ee50280cba60cef96bfb312ac9f806338aaad18"} Feb 23 13:28:10.608620 master-0 kubenswrapper[26474]: I0223 13:28:10.608540 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 13:28:10.976478 master-0 kubenswrapper[26474]: I0223 13:28:10.976407 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8mcwv"] Feb 23 13:28:10.978129 master-0 kubenswrapper[26474]: I0223 13:28:10.978096 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:11.012300 master-0 kubenswrapper[26474]: I0223 13:28:11.012091 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8mcwv"] Feb 23 13:28:11.082226 master-0 kubenswrapper[26474]: I0223 13:28:11.082148 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-81ea-account-create-update-j42mq"] Feb 23 13:28:11.084155 master-0 kubenswrapper[26474]: I0223 13:28:11.084115 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:11.090435 master-0 kubenswrapper[26474]: I0223 13:28:11.090128 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 23 13:28:11.094658 master-0 kubenswrapper[26474]: I0223 13:28:11.094596 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-81ea-account-create-update-j42mq"] Feb 23 13:28:11.127700 master-0 kubenswrapper[26474]: I0223 13:28:11.127519 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzg8\" (UniqueName: \"kubernetes.io/projected/5661b4fe-3f9f-41cc-a335-ace88eda5968-kube-api-access-lvzg8\") pod \"cinder-db-create-8mcwv\" (UID: \"5661b4fe-3f9f-41cc-a335-ace88eda5968\") " pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:11.127700 master-0 kubenswrapper[26474]: I0223 13:28:11.127674 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5661b4fe-3f9f-41cc-a335-ace88eda5968-operator-scripts\") pod \"cinder-db-create-8mcwv\" (UID: \"5661b4fe-3f9f-41cc-a335-ace88eda5968\") " pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:11.235385 master-0 kubenswrapper[26474]: I0223 13:28:11.231915 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5661b4fe-3f9f-41cc-a335-ace88eda5968-operator-scripts\") pod \"cinder-db-create-8mcwv\" (UID: \"5661b4fe-3f9f-41cc-a335-ace88eda5968\") " pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:11.235385 master-0 kubenswrapper[26474]: I0223 13:28:11.232070 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391ff9cf-6e10-4566-8651-b240689eb1d5-operator-scripts\") pod 
\"cinder-81ea-account-create-update-j42mq\" (UID: \"391ff9cf-6e10-4566-8651-b240689eb1d5\") " pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:11.235385 master-0 kubenswrapper[26474]: I0223 13:28:11.232177 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5xt2\" (UniqueName: \"kubernetes.io/projected/391ff9cf-6e10-4566-8651-b240689eb1d5-kube-api-access-m5xt2\") pod \"cinder-81ea-account-create-update-j42mq\" (UID: \"391ff9cf-6e10-4566-8651-b240689eb1d5\") " pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:11.235385 master-0 kubenswrapper[26474]: I0223 13:28:11.232451 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzg8\" (UniqueName: \"kubernetes.io/projected/5661b4fe-3f9f-41cc-a335-ace88eda5968-kube-api-access-lvzg8\") pod \"cinder-db-create-8mcwv\" (UID: \"5661b4fe-3f9f-41cc-a335-ace88eda5968\") " pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:11.235385 master-0 kubenswrapper[26474]: I0223 13:28:11.234107 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5661b4fe-3f9f-41cc-a335-ace88eda5968-operator-scripts\") pod \"cinder-db-create-8mcwv\" (UID: \"5661b4fe-3f9f-41cc-a335-ace88eda5968\") " pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:11.259451 master-0 kubenswrapper[26474]: I0223 13:28:11.257721 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzg8\" (UniqueName: \"kubernetes.io/projected/5661b4fe-3f9f-41cc-a335-ace88eda5968-kube-api-access-lvzg8\") pod \"cinder-db-create-8mcwv\" (UID: \"5661b4fe-3f9f-41cc-a335-ace88eda5968\") " pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:11.263627 master-0 kubenswrapper[26474]: I0223 13:28:11.263573 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zkbd6"] Feb 23 13:28:11.268012 master-0 
kubenswrapper[26474]: I0223 13:28:11.267945 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.272646 master-0 kubenswrapper[26474]: I0223 13:28:11.272015 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 13:28:11.275970 master-0 kubenswrapper[26474]: I0223 13:28:11.274078 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 13:28:11.275970 master-0 kubenswrapper[26474]: I0223 13:28:11.274369 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 13:28:11.279857 master-0 kubenswrapper[26474]: I0223 13:28:11.279784 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zkbd6"] Feb 23 13:28:11.302366 master-0 kubenswrapper[26474]: I0223 13:28:11.302286 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:11.337938 master-0 kubenswrapper[26474]: I0223 13:28:11.336835 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391ff9cf-6e10-4566-8651-b240689eb1d5-operator-scripts\") pod \"cinder-81ea-account-create-update-j42mq\" (UID: \"391ff9cf-6e10-4566-8651-b240689eb1d5\") " pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:11.337938 master-0 kubenswrapper[26474]: I0223 13:28:11.337057 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5xt2\" (UniqueName: \"kubernetes.io/projected/391ff9cf-6e10-4566-8651-b240689eb1d5-kube-api-access-m5xt2\") pod \"cinder-81ea-account-create-update-j42mq\" (UID: \"391ff9cf-6e10-4566-8651-b240689eb1d5\") " pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:11.341578 master-0 kubenswrapper[26474]: I0223 13:28:11.341112 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391ff9cf-6e10-4566-8651-b240689eb1d5-operator-scripts\") pod \"cinder-81ea-account-create-update-j42mq\" (UID: \"391ff9cf-6e10-4566-8651-b240689eb1d5\") " pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:11.411751 master-0 kubenswrapper[26474]: I0223 13:28:11.397491 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5xt2\" (UniqueName: \"kubernetes.io/projected/391ff9cf-6e10-4566-8651-b240689eb1d5-kube-api-access-m5xt2\") pod \"cinder-81ea-account-create-update-j42mq\" (UID: \"391ff9cf-6e10-4566-8651-b240689eb1d5\") " pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:11.429784 master-0 kubenswrapper[26474]: I0223 13:28:11.428783 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-p4kq8"] Feb 23 13:28:11.434598 master-0 kubenswrapper[26474]: I0223 13:28:11.434208 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:11.434598 master-0 kubenswrapper[26474]: I0223 13:28:11.434425 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-p4kq8" Feb 23 13:28:11.444708 master-0 kubenswrapper[26474]: I0223 13:28:11.443510 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-combined-ca-bundle\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.444708 master-0 kubenswrapper[26474]: I0223 13:28:11.443606 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jggv\" (UniqueName: \"kubernetes.io/projected/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-kube-api-access-6jggv\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.444708 master-0 kubenswrapper[26474]: I0223 13:28:11.443715 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-config-data\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.479366 master-0 kubenswrapper[26474]: I0223 13:28:11.479286 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p4kq8"] Feb 23 13:28:11.487944 master-0 kubenswrapper[26474]: I0223 13:28:11.487876 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e436-account-create-update-975k9"] Feb 23 13:28:11.489519 master-0 kubenswrapper[26474]: I0223 13:28:11.489484 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e436-account-create-update-975k9" Feb 23 13:28:11.494714 master-0 kubenswrapper[26474]: I0223 13:28:11.491845 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 23 13:28:11.510274 master-0 kubenswrapper[26474]: I0223 13:28:11.510120 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e436-account-create-update-975k9"] Feb 23 13:28:11.548644 master-0 kubenswrapper[26474]: I0223 13:28:11.545831 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-combined-ca-bundle\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.548644 master-0 kubenswrapper[26474]: I0223 13:28:11.546012 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jggv\" (UniqueName: \"kubernetes.io/projected/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-kube-api-access-6jggv\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.548644 master-0 kubenswrapper[26474]: I0223 13:28:11.546096 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-config-data\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.548644 master-0 kubenswrapper[26474]: I0223 13:28:11.546132 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-operator-scripts\") pod \"neutron-db-create-p4kq8\" (UID: 
\"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\") " pod="openstack/neutron-db-create-p4kq8" Feb 23 13:28:11.548644 master-0 kubenswrapper[26474]: I0223 13:28:11.546202 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4h6t\" (UniqueName: \"kubernetes.io/projected/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-kube-api-access-w4h6t\") pod \"neutron-db-create-p4kq8\" (UID: \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\") " pod="openstack/neutron-db-create-p4kq8" Feb 23 13:28:11.558467 master-0 kubenswrapper[26474]: I0223 13:28:11.550377 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-config-data\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.558467 master-0 kubenswrapper[26474]: I0223 13:28:11.550690 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-combined-ca-bundle\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.563243 master-0 kubenswrapper[26474]: I0223 13:28:11.563112 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jggv\" (UniqueName: \"kubernetes.io/projected/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-kube-api-access-6jggv\") pod \"keystone-db-sync-zkbd6\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") " pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.582499 master-0 kubenswrapper[26474]: I0223 13:28:11.582454 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qzxjb-config-52fcc" Feb 23 13:28:11.630208 master-0 kubenswrapper[26474]: I0223 13:28:11.630148 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zkbd6" Feb 23 13:28:11.651368 master-0 kubenswrapper[26474]: I0223 13:28:11.648451 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17f46f-162c-49b6-88bb-d8792f7c354f-operator-scripts\") pod \"neutron-e436-account-create-update-975k9\" (UID: \"8f17f46f-162c-49b6-88bb-d8792f7c354f\") " pod="openstack/neutron-e436-account-create-update-975k9" Feb 23 13:28:11.651368 master-0 kubenswrapper[26474]: I0223 13:28:11.648550 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-operator-scripts\") pod \"neutron-db-create-p4kq8\" (UID: \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\") " pod="openstack/neutron-db-create-p4kq8" Feb 23 13:28:11.651368 master-0 kubenswrapper[26474]: I0223 13:28:11.648612 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phn74\" (UniqueName: \"kubernetes.io/projected/8f17f46f-162c-49b6-88bb-d8792f7c354f-kube-api-access-phn74\") pod \"neutron-e436-account-create-update-975k9\" (UID: \"8f17f46f-162c-49b6-88bb-d8792f7c354f\") " pod="openstack/neutron-e436-account-create-update-975k9" Feb 23 13:28:11.651368 master-0 kubenswrapper[26474]: I0223 13:28:11.648636 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4h6t\" (UniqueName: \"kubernetes.io/projected/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-kube-api-access-w4h6t\") pod \"neutron-db-create-p4kq8\" (UID: \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\") " pod="openstack/neutron-db-create-p4kq8" Feb 23 
13:28:11.651368 master-0 kubenswrapper[26474]: I0223 13:28:11.649517 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-operator-scripts\") pod \"neutron-db-create-p4kq8\" (UID: \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\") " pod="openstack/neutron-db-create-p4kq8" Feb 23 13:28:11.669729 master-0 kubenswrapper[26474]: I0223 13:28:11.669684 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4h6t\" (UniqueName: \"kubernetes.io/projected/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-kube-api-access-w4h6t\") pod \"neutron-db-create-p4kq8\" (UID: \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\") " pod="openstack/neutron-db-create-p4kq8" Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.749998 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run\") pod \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") " Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750129 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run" (OuterVolumeSpecName: "var-run") pod "8babfbb4-1e3a-4a6a-ae17-2150bacdf981" (UID: "8babfbb4-1e3a-4a6a-ae17-2150bacdf981"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750161 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run-ovn\") pod \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") "
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750193 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-scripts\") pod \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") "
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750232 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8babfbb4-1e3a-4a6a-ae17-2150bacdf981" (UID: "8babfbb4-1e3a-4a6a-ae17-2150bacdf981"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750256 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7q7x\" (UniqueName: \"kubernetes.io/projected/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-kube-api-access-j7q7x\") pod \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") "
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750277 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-log-ovn\") pod \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") "
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750326 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-additional-scripts\") pod \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\" (UID: \"8babfbb4-1e3a-4a6a-ae17-2150bacdf981\") "
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750745 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17f46f-162c-49b6-88bb-d8792f7c354f-operator-scripts\") pod \"neutron-e436-account-create-update-975k9\" (UID: \"8f17f46f-162c-49b6-88bb-d8792f7c354f\") " pod="openstack/neutron-e436-account-create-update-975k9"
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.750891 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phn74\" (UniqueName: \"kubernetes.io/projected/8f17f46f-162c-49b6-88bb-d8792f7c354f-kube-api-access-phn74\") pod \"neutron-e436-account-create-update-975k9\" (UID: \"8f17f46f-162c-49b6-88bb-d8792f7c354f\") " pod="openstack/neutron-e436-account-create-update-975k9"
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.751011 26474 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.751022 26474 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-run-ovn\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.751099 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-scripts" (OuterVolumeSpecName: "scripts") pod "8babfbb4-1e3a-4a6a-ae17-2150bacdf981" (UID: "8babfbb4-1e3a-4a6a-ae17-2150bacdf981"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.751830 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8babfbb4-1e3a-4a6a-ae17-2150bacdf981" (UID: "8babfbb4-1e3a-4a6a-ae17-2150bacdf981"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.751858 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8babfbb4-1e3a-4a6a-ae17-2150bacdf981" (UID: "8babfbb4-1e3a-4a6a-ae17-2150bacdf981"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:28:11.757369 master-0 kubenswrapper[26474]: I0223 13:28:11.751932 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17f46f-162c-49b6-88bb-d8792f7c354f-operator-scripts\") pod \"neutron-e436-account-create-update-975k9\" (UID: \"8f17f46f-162c-49b6-88bb-d8792f7c354f\") " pod="openstack/neutron-e436-account-create-update-975k9"
Feb 23 13:28:11.766013 master-0 kubenswrapper[26474]: I0223 13:28:11.763166 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-kube-api-access-j7q7x" (OuterVolumeSpecName: "kube-api-access-j7q7x") pod "8babfbb4-1e3a-4a6a-ae17-2150bacdf981" (UID: "8babfbb4-1e3a-4a6a-ae17-2150bacdf981"). InnerVolumeSpecName "kube-api-access-j7q7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:11.772136 master-0 kubenswrapper[26474]: I0223 13:28:11.772061 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phn74\" (UniqueName: \"kubernetes.io/projected/8f17f46f-162c-49b6-88bb-d8792f7c354f-kube-api-access-phn74\") pod \"neutron-e436-account-create-update-975k9\" (UID: \"8f17f46f-162c-49b6-88bb-d8792f7c354f\") " pod="openstack/neutron-e436-account-create-update-975k9"
Feb 23 13:28:11.857257 master-0 kubenswrapper[26474]: I0223 13:28:11.853497 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:11.857257 master-0 kubenswrapper[26474]: I0223 13:28:11.853545 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7q7x\" (UniqueName: \"kubernetes.io/projected/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-kube-api-access-j7q7x\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:11.857257 master-0 kubenswrapper[26474]: I0223 13:28:11.853556 26474 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-var-log-ovn\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:11.857257 master-0 kubenswrapper[26474]: I0223 13:28:11.853570 26474 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8babfbb4-1e3a-4a6a-ae17-2150bacdf981-additional-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:11.868211 master-0 kubenswrapper[26474]: I0223 13:28:11.867099 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qzxjb-config-52fcc"]
Feb 23 13:28:11.891431 master-0 kubenswrapper[26474]: I0223 13:28:11.880007 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p4kq8"
Feb 23 13:28:11.891431 master-0 kubenswrapper[26474]: I0223 13:28:11.883402 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qzxjb-config-52fcc"]
Feb 23 13:28:11.906377 master-0 kubenswrapper[26474]: I0223 13:28:11.905308 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e436-account-create-update-975k9"
Feb 23 13:28:11.909240 master-0 kubenswrapper[26474]: I0223 13:28:11.908553 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 13:28:11.941170 master-0 kubenswrapper[26474]: I0223 13:28:11.941112 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8mcwv"]
Feb 23 13:28:12.039886 master-0 kubenswrapper[26474]: I0223 13:28:12.039832 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-81ea-account-create-update-j42mq"]
Feb 23 13:28:12.136518 master-0 kubenswrapper[26474]: I0223 13:28:12.136482 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-81ea-account-create-update-j42mq" event={"ID":"391ff9cf-6e10-4566-8651-b240689eb1d5","Type":"ContainerStarted","Data":"eab01639d239cb1646254e7180dc3b27a6b89e08a6f87c3c3bdfa113b532c319"}
Feb 23 13:28:12.181055 master-0 kubenswrapper[26474]: I0223 13:28:12.181014 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b3d2fde8a0dde229d3f2f5eae077bd211567d27c9466b7507f8368d077cfdbf"
Feb 23 13:28:12.181215 master-0 kubenswrapper[26474]: I0223 13:28:12.181086 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzxjb-config-52fcc"
Feb 23 13:28:12.190796 master-0 kubenswrapper[26474]: I0223 13:28:12.190613 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8mcwv" event={"ID":"5661b4fe-3f9f-41cc-a335-ace88eda5968","Type":"ContainerStarted","Data":"c66250b3e53e07026962d7a3303656103970391cefd32fd7472523cf2df1c16b"}
Feb 23 13:28:12.259309 master-0 kubenswrapper[26474]: I0223 13:28:12.259265 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zkbd6"]
Feb 23 13:28:12.260789 master-0 kubenswrapper[26474]: W0223 13:28:12.259265 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d5d620a_05d5_4a5d_8e3b_596cf44d0713.slice/crio-177bd0cefbb4cad210ad433be2f56c938d1baed3d1c12af47e583bfbe20909ac WatchSource:0}: Error finding container 177bd0cefbb4cad210ad433be2f56c938d1baed3d1c12af47e583bfbe20909ac: Status 404 returned error can't find the container with id 177bd0cefbb4cad210ad433be2f56c938d1baed3d1c12af47e583bfbe20909ac
Feb 23 13:28:12.420298 master-0 kubenswrapper[26474]: I0223 13:28:12.420221 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8babfbb4-1e3a-4a6a-ae17-2150bacdf981" path="/var/lib/kubelet/pods/8babfbb4-1e3a-4a6a-ae17-2150bacdf981/volumes"
Feb 23 13:28:12.635817 master-0 kubenswrapper[26474]: I0223 13:28:12.635750 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p4kq8"]
Feb 23 13:28:12.683063 master-0 kubenswrapper[26474]: W0223 13:28:12.682943 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f17f46f_162c_49b6_88bb_d8792f7c354f.slice/crio-536ea9b12477403c2b852eb0695e5b38535825d857549e00e1f90e45357b5840 WatchSource:0}: Error finding container 536ea9b12477403c2b852eb0695e5b38535825d857549e00e1f90e45357b5840: Status 404 returned error can't find the container with id 536ea9b12477403c2b852eb0695e5b38535825d857549e00e1f90e45357b5840
Feb 23 13:28:12.710603 master-0 kubenswrapper[26474]: I0223 13:28:12.707886 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e436-account-create-update-975k9"]
Feb 23 13:28:13.205847 master-0 kubenswrapper[26474]: I0223 13:28:13.205784 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p4kq8" event={"ID":"fac8cfc7-d688-489f-ac4e-71f9363b2c5b","Type":"ContainerStarted","Data":"3e4c9951bae0e1ee2f2843f99fb2d6ca829e3a4b89825ad8a1902555eb8d2972"}
Feb 23 13:28:13.205847 master-0 kubenswrapper[26474]: I0223 13:28:13.205842 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p4kq8" event={"ID":"fac8cfc7-d688-489f-ac4e-71f9363b2c5b","Type":"ContainerStarted","Data":"d29908c2026c61425e30fa8637cb743544092f69e5030048ff143bcb4a7c5f1d"}
Feb 23 13:28:13.210968 master-0 kubenswrapper[26474]: I0223 13:28:13.210929 26474 generic.go:334] "Generic (PLEG): container finished" podID="5661b4fe-3f9f-41cc-a335-ace88eda5968" containerID="92a3c46bf13eda14978b4a6ead51e6f4b591a00bacb4fc7a976903d0d6f88f97" exitCode=0
Feb 23 13:28:13.211095 master-0 kubenswrapper[26474]: I0223 13:28:13.211033 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8mcwv" event={"ID":"5661b4fe-3f9f-41cc-a335-ace88eda5968","Type":"ContainerDied","Data":"92a3c46bf13eda14978b4a6ead51e6f4b591a00bacb4fc7a976903d0d6f88f97"}
Feb 23 13:28:13.221302 master-0 kubenswrapper[26474]: I0223 13:28:13.220308 26474 generic.go:334] "Generic (PLEG): container finished" podID="391ff9cf-6e10-4566-8651-b240689eb1d5" containerID="9a2f0232e11b32ae84668df0faaba00d2b144ab5b45e0c995e4e1871491f2198" exitCode=0
Feb 23 13:28:13.221302 master-0 kubenswrapper[26474]: I0223 13:28:13.220387 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-81ea-account-create-update-j42mq" event={"ID":"391ff9cf-6e10-4566-8651-b240689eb1d5","Type":"ContainerDied","Data":"9a2f0232e11b32ae84668df0faaba00d2b144ab5b45e0c995e4e1871491f2198"}
Feb 23 13:28:13.226253 master-0 kubenswrapper[26474]: I0223 13:28:13.226162 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zkbd6" event={"ID":"4d5d620a-05d5-4a5d-8e3b-596cf44d0713","Type":"ContainerStarted","Data":"177bd0cefbb4cad210ad433be2f56c938d1baed3d1c12af47e583bfbe20909ac"}
Feb 23 13:28:13.227078 master-0 kubenswrapper[26474]: I0223 13:28:13.227017 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-p4kq8" podStartSLOduration=2.2270007769999998 podStartE2EDuration="2.227000777s" podCreationTimestamp="2026-02-23 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:13.222740023 +0000 UTC m=+815.069247700" watchObservedRunningTime="2026-02-23 13:28:13.227000777 +0000 UTC m=+815.073508454"
Feb 23 13:28:13.229239 master-0 kubenswrapper[26474]: I0223 13:28:13.229165 26474 generic.go:334] "Generic (PLEG): container finished" podID="29a70f49-894f-470f-bbe1-205e6714fe94" containerID="00e11b3dd36a1cdcaafd663ddab50cbcf45f412decf1170269872b9029d94ec9" exitCode=0
Feb 23 13:28:13.229355 master-0 kubenswrapper[26474]: I0223 13:28:13.229268 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tcp42" event={"ID":"29a70f49-894f-470f-bbe1-205e6714fe94","Type":"ContainerDied","Data":"00e11b3dd36a1cdcaafd663ddab50cbcf45f412decf1170269872b9029d94ec9"}
Feb 23 13:28:13.232983 master-0 kubenswrapper[26474]: I0223 13:28:13.232919 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e436-account-create-update-975k9" event={"ID":"8f17f46f-162c-49b6-88bb-d8792f7c354f","Type":"ContainerStarted","Data":"ddc71d47000b548986a8b06fa421d1fc629b520a000ec6a97445f4cc844f8577"}
Feb 23 13:28:13.233132 master-0 kubenswrapper[26474]: I0223 13:28:13.232990 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e436-account-create-update-975k9" event={"ID":"8f17f46f-162c-49b6-88bb-d8792f7c354f","Type":"ContainerStarted","Data":"536ea9b12477403c2b852eb0695e5b38535825d857549e00e1f90e45357b5840"}
Feb 23 13:28:13.262922 master-0 kubenswrapper[26474]: I0223 13:28:13.262825 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-e436-account-create-update-975k9" podStartSLOduration=2.262779038 podStartE2EDuration="2.262779038s" podCreationTimestamp="2026-02-23 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:13.249419283 +0000 UTC m=+815.095926960" watchObservedRunningTime="2026-02-23 13:28:13.262779038 +0000 UTC m=+815.109286715"
Feb 23 13:28:14.247326 master-0 kubenswrapper[26474]: I0223 13:28:14.247096 26474 generic.go:334] "Generic (PLEG): container finished" podID="8f17f46f-162c-49b6-88bb-d8792f7c354f" containerID="ddc71d47000b548986a8b06fa421d1fc629b520a000ec6a97445f4cc844f8577" exitCode=0
Feb 23 13:28:14.247326 master-0 kubenswrapper[26474]: I0223 13:28:14.247200 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e436-account-create-update-975k9" event={"ID":"8f17f46f-162c-49b6-88bb-d8792f7c354f","Type":"ContainerDied","Data":"ddc71d47000b548986a8b06fa421d1fc629b520a000ec6a97445f4cc844f8577"}
Feb 23 13:28:14.249853 master-0 kubenswrapper[26474]: I0223 13:28:14.249537 26474 generic.go:334] "Generic (PLEG): container finished" podID="fac8cfc7-d688-489f-ac4e-71f9363b2c5b" containerID="3e4c9951bae0e1ee2f2843f99fb2d6ca829e3a4b89825ad8a1902555eb8d2972" exitCode=0
Feb 23 13:28:14.249853 master-0 kubenswrapper[26474]: I0223 13:28:14.249596 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p4kq8" event={"ID":"fac8cfc7-d688-489f-ac4e-71f9363b2c5b","Type":"ContainerDied","Data":"3e4c9951bae0e1ee2f2843f99fb2d6ca829e3a4b89825ad8a1902555eb8d2972"}
Feb 23 13:28:16.105300 master-0 kubenswrapper[26474]: I0223 13:28:16.105233 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d4b6db685-b664p"
Feb 23 13:28:16.216430 master-0 kubenswrapper[26474]: I0223 13:28:16.215073 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-mfwb8"]
Feb 23 13:28:16.216430 master-0 kubenswrapper[26474]: I0223 13:28:16.215386 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" podUID="018c5ca6-eb74-483c-9079-6463547e3a46" containerName="dnsmasq-dns" containerID="cri-o://1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e" gracePeriod=10
Feb 23 13:28:16.568719 master-0 kubenswrapper[26474]: I0223 13:28:16.568638 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" podUID="018c5ca6-eb74-483c-9079-6463547e3a46" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.178:5353: connect: connection refused"
Feb 23 13:28:16.792879 master-0 kubenswrapper[26474]: I0223 13:28:16.792817 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8mcwv"
Feb 23 13:28:16.793294 master-0 kubenswrapper[26474]: I0223 13:28:16.793264 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tcp42"
Feb 23 13:28:16.802904 master-0 kubenswrapper[26474]: I0223 13:28:16.802797 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-81ea-account-create-update-j42mq"
Feb 23 13:28:16.837411 master-0 kubenswrapper[26474]: I0223 13:28:16.831782 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e436-account-create-update-975k9"
Feb 23 13:28:16.859237 master-0 kubenswrapper[26474]: I0223 13:28:16.859183 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p4kq8"
Feb 23 13:28:16.883334 master-0 kubenswrapper[26474]: I0223 13:28:16.883265 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5661b4fe-3f9f-41cc-a335-ace88eda5968-operator-scripts\") pod \"5661b4fe-3f9f-41cc-a335-ace88eda5968\" (UID: \"5661b4fe-3f9f-41cc-a335-ace88eda5968\") "
Feb 23 13:28:16.883538 master-0 kubenswrapper[26474]: I0223 13:28:16.883473 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-combined-ca-bundle\") pod \"29a70f49-894f-470f-bbe1-205e6714fe94\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") "
Feb 23 13:28:16.883739 master-0 kubenswrapper[26474]: I0223 13:28:16.883705 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlvjk\" (UniqueName: \"kubernetes.io/projected/29a70f49-894f-470f-bbe1-205e6714fe94-kube-api-access-xlvjk\") pod \"29a70f49-894f-470f-bbe1-205e6714fe94\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") "
Feb 23 13:28:16.883804 master-0 kubenswrapper[26474]: I0223 13:28:16.883739 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvzg8\" (UniqueName: \"kubernetes.io/projected/5661b4fe-3f9f-41cc-a335-ace88eda5968-kube-api-access-lvzg8\") pod \"5661b4fe-3f9f-41cc-a335-ace88eda5968\" (UID: \"5661b4fe-3f9f-41cc-a335-ace88eda5968\") "
Feb 23 13:28:16.883804 master-0 kubenswrapper[26474]: I0223 13:28:16.883787 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-config-data\") pod \"29a70f49-894f-470f-bbe1-205e6714fe94\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") "
Feb 23 13:28:16.883872 master-0 kubenswrapper[26474]: I0223 13:28:16.883817 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-db-sync-config-data\") pod \"29a70f49-894f-470f-bbe1-205e6714fe94\" (UID: \"29a70f49-894f-470f-bbe1-205e6714fe94\") "
Feb 23 13:28:16.886761 master-0 kubenswrapper[26474]: I0223 13:28:16.886714 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5661b4fe-3f9f-41cc-a335-ace88eda5968-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5661b4fe-3f9f-41cc-a335-ace88eda5968" (UID: "5661b4fe-3f9f-41cc-a335-ace88eda5968"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:16.888495 master-0 kubenswrapper[26474]: I0223 13:28:16.888417 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "29a70f49-894f-470f-bbe1-205e6714fe94" (UID: "29a70f49-894f-470f-bbe1-205e6714fe94"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:16.888851 master-0 kubenswrapper[26474]: I0223 13:28:16.888768 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a70f49-894f-470f-bbe1-205e6714fe94-kube-api-access-xlvjk" (OuterVolumeSpecName: "kube-api-access-xlvjk") pod "29a70f49-894f-470f-bbe1-205e6714fe94" (UID: "29a70f49-894f-470f-bbe1-205e6714fe94"). InnerVolumeSpecName "kube-api-access-xlvjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:16.888937 master-0 kubenswrapper[26474]: I0223 13:28:16.888897 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5661b4fe-3f9f-41cc-a335-ace88eda5968-kube-api-access-lvzg8" (OuterVolumeSpecName: "kube-api-access-lvzg8") pod "5661b4fe-3f9f-41cc-a335-ace88eda5968" (UID: "5661b4fe-3f9f-41cc-a335-ace88eda5968"). InnerVolumeSpecName "kube-api-access-lvzg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:16.926698 master-0 kubenswrapper[26474]: I0223 13:28:16.926611 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29a70f49-894f-470f-bbe1-205e6714fe94" (UID: "29a70f49-894f-470f-bbe1-205e6714fe94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:16.964416 master-0 kubenswrapper[26474]: I0223 13:28:16.962781 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8"
Feb 23 13:28:16.988748 master-0 kubenswrapper[26474]: I0223 13:28:16.988655 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-config-data" (OuterVolumeSpecName: "config-data") pod "29a70f49-894f-470f-bbe1-205e6714fe94" (UID: "29a70f49-894f-470f-bbe1-205e6714fe94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:16.989868 master-0 kubenswrapper[26474]: I0223 13:28:16.989838 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5xt2\" (UniqueName: \"kubernetes.io/projected/391ff9cf-6e10-4566-8651-b240689eb1d5-kube-api-access-m5xt2\") pod \"391ff9cf-6e10-4566-8651-b240689eb1d5\" (UID: \"391ff9cf-6e10-4566-8651-b240689eb1d5\") "
Feb 23 13:28:16.989987 master-0 kubenswrapper[26474]: I0223 13:28:16.989959 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phn74\" (UniqueName: \"kubernetes.io/projected/8f17f46f-162c-49b6-88bb-d8792f7c354f-kube-api-access-phn74\") pod \"8f17f46f-162c-49b6-88bb-d8792f7c354f\" (UID: \"8f17f46f-162c-49b6-88bb-d8792f7c354f\") "
Feb 23 13:28:16.990462 master-0 kubenswrapper[26474]: I0223 13:28:16.990433 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4h6t\" (UniqueName: \"kubernetes.io/projected/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-kube-api-access-w4h6t\") pod \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\" (UID: \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\") "
Feb 23 13:28:16.990499 master-0 kubenswrapper[26474]: I0223 13:28:16.990468 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17f46f-162c-49b6-88bb-d8792f7c354f-operator-scripts\") pod \"8f17f46f-162c-49b6-88bb-d8792f7c354f\" (UID: \"8f17f46f-162c-49b6-88bb-d8792f7c354f\") "
Feb 23 13:28:16.990538 master-0 kubenswrapper[26474]: I0223 13:28:16.990517 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-operator-scripts\") pod \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\" (UID: \"fac8cfc7-d688-489f-ac4e-71f9363b2c5b\") "
Feb 23 13:28:16.990654 master-0 kubenswrapper[26474]: I0223 13:28:16.990626 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391ff9cf-6e10-4566-8651-b240689eb1d5-operator-scripts\") pod \"391ff9cf-6e10-4566-8651-b240689eb1d5\" (UID: \"391ff9cf-6e10-4566-8651-b240689eb1d5\") "
Feb 23 13:28:16.992726 master-0 kubenswrapper[26474]: I0223 13:28:16.992694 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvzg8\" (UniqueName: \"kubernetes.io/projected/5661b4fe-3f9f-41cc-a335-ace88eda5968-kube-api-access-lvzg8\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:16.992726 master-0 kubenswrapper[26474]: I0223 13:28:16.992722 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlvjk\" (UniqueName: \"kubernetes.io/projected/29a70f49-894f-470f-bbe1-205e6714fe94-kube-api-access-xlvjk\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:16.992814 master-0 kubenswrapper[26474]: I0223 13:28:16.992733 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:16.992814 master-0 kubenswrapper[26474]: I0223 13:28:16.992745 26474 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:16.992814 master-0 kubenswrapper[26474]: I0223 13:28:16.992754 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5661b4fe-3f9f-41cc-a335-ace88eda5968-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:16.992814 master-0 kubenswrapper[26474]: I0223 13:28:16.992764 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a70f49-894f-470f-bbe1-205e6714fe94-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:16.993204 master-0 kubenswrapper[26474]: I0223 13:28:16.993170 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/391ff9cf-6e10-4566-8651-b240689eb1d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "391ff9cf-6e10-4566-8651-b240689eb1d5" (UID: "391ff9cf-6e10-4566-8651-b240689eb1d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:17.000769 master-0 kubenswrapper[26474]: I0223 13:28:16.998675 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391ff9cf-6e10-4566-8651-b240689eb1d5-kube-api-access-m5xt2" (OuterVolumeSpecName: "kube-api-access-m5xt2") pod "391ff9cf-6e10-4566-8651-b240689eb1d5" (UID: "391ff9cf-6e10-4566-8651-b240689eb1d5"). InnerVolumeSpecName "kube-api-access-m5xt2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:17.006202 master-0 kubenswrapper[26474]: I0223 13:28:17.006016 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f17f46f-162c-49b6-88bb-d8792f7c354f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f17f46f-162c-49b6-88bb-d8792f7c354f" (UID: "8f17f46f-162c-49b6-88bb-d8792f7c354f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:17.009269 master-0 kubenswrapper[26474]: I0223 13:28:17.009205 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-kube-api-access-w4h6t" (OuterVolumeSpecName: "kube-api-access-w4h6t") pod "fac8cfc7-d688-489f-ac4e-71f9363b2c5b" (UID: "fac8cfc7-d688-489f-ac4e-71f9363b2c5b"). InnerVolumeSpecName "kube-api-access-w4h6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:17.018039 master-0 kubenswrapper[26474]: I0223 13:28:17.017959 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fac8cfc7-d688-489f-ac4e-71f9363b2c5b" (UID: "fac8cfc7-d688-489f-ac4e-71f9363b2c5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:17.021764 master-0 kubenswrapper[26474]: I0223 13:28:17.019797 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f17f46f-162c-49b6-88bb-d8792f7c354f-kube-api-access-phn74" (OuterVolumeSpecName: "kube-api-access-phn74") pod "8f17f46f-162c-49b6-88bb-d8792f7c354f" (UID: "8f17f46f-162c-49b6-88bb-d8792f7c354f"). InnerVolumeSpecName "kube-api-access-phn74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:17.095293 master-0 kubenswrapper[26474]: I0223 13:28:17.095156 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-config\") pod \"018c5ca6-eb74-483c-9079-6463547e3a46\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") "
Feb 23 13:28:17.095556 master-0 kubenswrapper[26474]: I0223 13:28:17.095379 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th9f9\" (UniqueName: \"kubernetes.io/projected/018c5ca6-eb74-483c-9079-6463547e3a46-kube-api-access-th9f9\") pod \"018c5ca6-eb74-483c-9079-6463547e3a46\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") "
Feb 23 13:28:17.095791 master-0 kubenswrapper[26474]: I0223 13:28:17.095715 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-dns-svc\") pod \"018c5ca6-eb74-483c-9079-6463547e3a46\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") "
Feb 23 13:28:17.095902 master-0 kubenswrapper[26474]: I0223 13:28:17.095869 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-sb\") pod \"018c5ca6-eb74-483c-9079-6463547e3a46\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") "
Feb 23 13:28:17.096359 master-0 kubenswrapper[26474]: I0223 13:28:17.096088 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-nb\") pod \"018c5ca6-eb74-483c-9079-6463547e3a46\" (UID: \"018c5ca6-eb74-483c-9079-6463547e3a46\") "
Feb 23 13:28:17.097036 master-0 kubenswrapper[26474]: I0223 13:28:17.097003 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391ff9cf-6e10-4566-8651-b240689eb1d5-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.097084 master-0 kubenswrapper[26474]: I0223 13:28:17.097035 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5xt2\" (UniqueName: \"kubernetes.io/projected/391ff9cf-6e10-4566-8651-b240689eb1d5-kube-api-access-m5xt2\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.097084 master-0 kubenswrapper[26474]: I0223 13:28:17.097052 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phn74\" (UniqueName: \"kubernetes.io/projected/8f17f46f-162c-49b6-88bb-d8792f7c354f-kube-api-access-phn74\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.097084 master-0 kubenswrapper[26474]: I0223 13:28:17.097066 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4h6t\" (UniqueName: \"kubernetes.io/projected/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-kube-api-access-w4h6t\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.097084 master-0 kubenswrapper[26474]: I0223 13:28:17.097080 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17f46f-162c-49b6-88bb-d8792f7c354f-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.097206 master-0 kubenswrapper[26474]: I0223 13:28:17.097092 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac8cfc7-d688-489f-ac4e-71f9363b2c5b-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.103701 master-0 kubenswrapper[26474]: I0223 13:28:17.103620 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018c5ca6-eb74-483c-9079-6463547e3a46-kube-api-access-th9f9" (OuterVolumeSpecName: "kube-api-access-th9f9") pod "018c5ca6-eb74-483c-9079-6463547e3a46" (UID: "018c5ca6-eb74-483c-9079-6463547e3a46"). InnerVolumeSpecName "kube-api-access-th9f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:17.140951 master-0 kubenswrapper[26474]: I0223 13:28:17.140827 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "018c5ca6-eb74-483c-9079-6463547e3a46" (UID: "018c5ca6-eb74-483c-9079-6463547e3a46"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:17.144103 master-0 kubenswrapper[26474]: I0223 13:28:17.144039 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-config" (OuterVolumeSpecName: "config") pod "018c5ca6-eb74-483c-9079-6463547e3a46" (UID: "018c5ca6-eb74-483c-9079-6463547e3a46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:17.152037 master-0 kubenswrapper[26474]: I0223 13:28:17.151970 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "018c5ca6-eb74-483c-9079-6463547e3a46" (UID: "018c5ca6-eb74-483c-9079-6463547e3a46"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:17.170210 master-0 kubenswrapper[26474]: I0223 13:28:17.170100 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "018c5ca6-eb74-483c-9079-6463547e3a46" (UID: "018c5ca6-eb74-483c-9079-6463547e3a46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:17.198720 master-0 kubenswrapper[26474]: I0223 13:28:17.198640 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.198720 master-0 kubenswrapper[26474]: I0223 13:28:17.198687 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.198720 master-0 kubenswrapper[26474]: I0223 13:28:17.198700 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.198720 master-0 kubenswrapper[26474]: I0223 13:28:17.198712 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018c5ca6-eb74-483c-9079-6463547e3a46-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.198720 master-0 kubenswrapper[26474]: I0223 13:28:17.198724 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th9f9\" (UniqueName: \"kubernetes.io/projected/018c5ca6-eb74-483c-9079-6463547e3a46-kube-api-access-th9f9\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:17.284845 master-0 kubenswrapper[26474]: I0223 13:28:17.284784 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-81ea-account-create-update-j42mq" Feb 23 13:28:17.284845 master-0 kubenswrapper[26474]: I0223 13:28:17.284770 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-81ea-account-create-update-j42mq" event={"ID":"391ff9cf-6e10-4566-8651-b240689eb1d5","Type":"ContainerDied","Data":"eab01639d239cb1646254e7180dc3b27a6b89e08a6f87c3c3bdfa113b532c319"} Feb 23 13:28:17.284845 master-0 kubenswrapper[26474]: I0223 13:28:17.284841 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab01639d239cb1646254e7180dc3b27a6b89e08a6f87c3c3bdfa113b532c319" Feb 23 13:28:17.286241 master-0 kubenswrapper[26474]: I0223 13:28:17.286202 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zkbd6" event={"ID":"4d5d620a-05d5-4a5d-8e3b-596cf44d0713","Type":"ContainerStarted","Data":"19f9dc6d5fbc07d9f0abac4920d0888b0a6272322d6423ad80d1f37b75508fb2"} Feb 23 13:28:17.298603 master-0 kubenswrapper[26474]: I0223 13:28:17.296704 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tcp42" event={"ID":"29a70f49-894f-470f-bbe1-205e6714fe94","Type":"ContainerDied","Data":"b619730592fa3e03603535d1beb302bb02b068ab9da5771140ea4a437e4f7064"} Feb 23 13:28:17.298603 master-0 kubenswrapper[26474]: I0223 13:28:17.296797 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b619730592fa3e03603535d1beb302bb02b068ab9da5771140ea4a437e4f7064" Feb 23 13:28:17.300330 master-0 kubenswrapper[26474]: I0223 13:28:17.299175 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tcp42" Feb 23 13:28:17.300330 master-0 kubenswrapper[26474]: I0223 13:28:17.299532 26474 generic.go:334] "Generic (PLEG): container finished" podID="018c5ca6-eb74-483c-9079-6463547e3a46" containerID="1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e" exitCode=0 Feb 23 13:28:17.300330 master-0 kubenswrapper[26474]: I0223 13:28:17.299603 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" event={"ID":"018c5ca6-eb74-483c-9079-6463547e3a46","Type":"ContainerDied","Data":"1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e"} Feb 23 13:28:17.300330 master-0 kubenswrapper[26474]: I0223 13:28:17.299633 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" event={"ID":"018c5ca6-eb74-483c-9079-6463547e3a46","Type":"ContainerDied","Data":"bba42275a22ff580154a84cf7eb3657f16c880ac26ea5115f1b0d361fbf24481"} Feb 23 13:28:17.300330 master-0 kubenswrapper[26474]: I0223 13:28:17.299651 26474 scope.go:117] "RemoveContainer" containerID="1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e" Feb 23 13:28:17.300330 master-0 kubenswrapper[26474]: I0223 13:28:17.299768 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c55964f59-mfwb8" Feb 23 13:28:17.315016 master-0 kubenswrapper[26474]: I0223 13:28:17.314954 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e436-account-create-update-975k9" event={"ID":"8f17f46f-162c-49b6-88bb-d8792f7c354f","Type":"ContainerDied","Data":"536ea9b12477403c2b852eb0695e5b38535825d857549e00e1f90e45357b5840"} Feb 23 13:28:17.315016 master-0 kubenswrapper[26474]: I0223 13:28:17.315016 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="536ea9b12477403c2b852eb0695e5b38535825d857549e00e1f90e45357b5840" Feb 23 13:28:17.316302 master-0 kubenswrapper[26474]: I0223 13:28:17.314972 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e436-account-create-update-975k9" Feb 23 13:28:17.320399 master-0 kubenswrapper[26474]: I0223 13:28:17.319911 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p4kq8" Feb 23 13:28:17.321852 master-0 kubenswrapper[26474]: I0223 13:28:17.321610 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p4kq8" event={"ID":"fac8cfc7-d688-489f-ac4e-71f9363b2c5b","Type":"ContainerDied","Data":"d29908c2026c61425e30fa8637cb743544092f69e5030048ff143bcb4a7c5f1d"} Feb 23 13:28:17.321852 master-0 kubenswrapper[26474]: I0223 13:28:17.321684 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29908c2026c61425e30fa8637cb743544092f69e5030048ff143bcb4a7c5f1d" Feb 23 13:28:17.325232 master-0 kubenswrapper[26474]: I0223 13:28:17.324468 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8mcwv" event={"ID":"5661b4fe-3f9f-41cc-a335-ace88eda5968","Type":"ContainerDied","Data":"c66250b3e53e07026962d7a3303656103970391cefd32fd7472523cf2df1c16b"} Feb 23 13:28:17.325232 master-0 kubenswrapper[26474]: I0223 
13:28:17.324521 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c66250b3e53e07026962d7a3303656103970391cefd32fd7472523cf2df1c16b" Feb 23 13:28:17.325232 master-0 kubenswrapper[26474]: I0223 13:28:17.324546 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8mcwv" Feb 23 13:28:17.330158 master-0 kubenswrapper[26474]: I0223 13:28:17.330126 26474 scope.go:117] "RemoveContainer" containerID="ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d" Feb 23 13:28:17.364362 master-0 kubenswrapper[26474]: I0223 13:28:17.359133 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zkbd6" podStartSLOduration=2.009217855 podStartE2EDuration="6.359113911s" podCreationTimestamp="2026-02-23 13:28:11 +0000 UTC" firstStartedPulling="2026-02-23 13:28:12.261626598 +0000 UTC m=+814.108134275" lastFinishedPulling="2026-02-23 13:28:16.611522644 +0000 UTC m=+818.458030331" observedRunningTime="2026-02-23 13:28:17.310015884 +0000 UTC m=+819.156523561" watchObservedRunningTime="2026-02-23 13:28:17.359113911 +0000 UTC m=+819.205621588" Feb 23 13:28:17.370357 master-0 kubenswrapper[26474]: I0223 13:28:17.366444 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-mfwb8"] Feb 23 13:28:17.379444 master-0 kubenswrapper[26474]: I0223 13:28:17.379309 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-mfwb8"] Feb 23 13:28:17.382784 master-0 kubenswrapper[26474]: I0223 13:28:17.382084 26474 scope.go:117] "RemoveContainer" containerID="1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e" Feb 23 13:28:17.382784 master-0 kubenswrapper[26474]: E0223 13:28:17.382730 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e\": container with ID starting with 1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e not found: ID does not exist" containerID="1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e" Feb 23 13:28:17.383004 master-0 kubenswrapper[26474]: I0223 13:28:17.382772 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e"} err="failed to get container status \"1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e\": rpc error: code = NotFound desc = could not find container \"1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e\": container with ID starting with 1a14818296debaaf526d24229ccd48c10cbc7900d6a6f273888f9414b3004d9e not found: ID does not exist" Feb 23 13:28:17.383004 master-0 kubenswrapper[26474]: I0223 13:28:17.382802 26474 scope.go:117] "RemoveContainer" containerID="ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d" Feb 23 13:28:17.383327 master-0 kubenswrapper[26474]: E0223 13:28:17.383293 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d\": container with ID starting with ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d not found: ID does not exist" containerID="ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d" Feb 23 13:28:17.383396 master-0 kubenswrapper[26474]: I0223 13:28:17.383322 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d"} err="failed to get container status \"ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d\": rpc error: code = NotFound desc = could not find container 
\"ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d\": container with ID starting with ea1a2349542381cbca02f3a6e1c7c9df034319347cea78408bf0876b3c04760d not found: ID does not exist" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.271448 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ffb5d646f-ncmzp"] Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: E0223 13:28:18.272043 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018c5ca6-eb74-483c-9079-6463547e3a46" containerName="dnsmasq-dns" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272059 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="018c5ca6-eb74-483c-9079-6463547e3a46" containerName="dnsmasq-dns" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: E0223 13:28:18.272080 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f17f46f-162c-49b6-88bb-d8792f7c354f" containerName="mariadb-account-create-update" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272090 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f17f46f-162c-49b6-88bb-d8792f7c354f" containerName="mariadb-account-create-update" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: E0223 13:28:18.272112 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5661b4fe-3f9f-41cc-a335-ace88eda5968" containerName="mariadb-database-create" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272122 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5661b4fe-3f9f-41cc-a335-ace88eda5968" containerName="mariadb-database-create" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: E0223 13:28:18.272144 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018c5ca6-eb74-483c-9079-6463547e3a46" containerName="init" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272150 26474 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="018c5ca6-eb74-483c-9079-6463547e3a46" containerName="init" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: E0223 13:28:18.272171 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a70f49-894f-470f-bbe1-205e6714fe94" containerName="glance-db-sync" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272180 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a70f49-894f-470f-bbe1-205e6714fe94" containerName="glance-db-sync" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: E0223 13:28:18.272199 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391ff9cf-6e10-4566-8651-b240689eb1d5" containerName="mariadb-account-create-update" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272206 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="391ff9cf-6e10-4566-8651-b240689eb1d5" containerName="mariadb-account-create-update" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: E0223 13:28:18.272215 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac8cfc7-d688-489f-ac4e-71f9363b2c5b" containerName="mariadb-database-create" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272222 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac8cfc7-d688-489f-ac4e-71f9363b2c5b" containerName="mariadb-database-create" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: E0223 13:28:18.272232 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8babfbb4-1e3a-4a6a-ae17-2150bacdf981" containerName="ovn-config" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272239 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="8babfbb4-1e3a-4a6a-ae17-2150bacdf981" containerName="ovn-config" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272501 26474 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8babfbb4-1e3a-4a6a-ae17-2150bacdf981" containerName="ovn-config" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272529 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="391ff9cf-6e10-4566-8651-b240689eb1d5" containerName="mariadb-account-create-update" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272544 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac8cfc7-d688-489f-ac4e-71f9363b2c5b" containerName="mariadb-database-create" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272554 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a70f49-894f-470f-bbe1-205e6714fe94" containerName="glance-db-sync" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272563 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f17f46f-162c-49b6-88bb-d8792f7c354f" containerName="mariadb-account-create-update" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272576 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="018c5ca6-eb74-483c-9079-6463547e3a46" containerName="dnsmasq-dns" Feb 23 13:28:18.274718 master-0 kubenswrapper[26474]: I0223 13:28:18.272595 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5661b4fe-3f9f-41cc-a335-ace88eda5968" containerName="mariadb-database-create" Feb 23 13:28:18.287155 master-0 kubenswrapper[26474]: I0223 13:28:18.277289 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.387397 master-0 kubenswrapper[26474]: I0223 13:28:18.387251 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ffb5d646f-ncmzp"] Feb 23 13:28:18.427526 master-0 kubenswrapper[26474]: I0223 13:28:18.427454 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018c5ca6-eb74-483c-9079-6463547e3a46" path="/var/lib/kubelet/pods/018c5ca6-eb74-483c-9079-6463547e3a46/volumes" Feb 23 13:28:18.431055 master-0 kubenswrapper[26474]: I0223 13:28:18.431003 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-svc\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.431141 master-0 kubenswrapper[26474]: I0223 13:28:18.431061 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.431181 master-0 kubenswrapper[26474]: I0223 13:28:18.431137 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-config\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.431282 master-0 kubenswrapper[26474]: I0223 13:28:18.431232 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-swift-storage-0\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.431521 master-0 kubenswrapper[26474]: I0223 13:28:18.431496 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.431683 master-0 kubenswrapper[26474]: I0223 13:28:18.431652 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tmd\" (UniqueName: \"kubernetes.io/projected/38e1286f-c726-4775-9da2-acb19ace1a0f-kube-api-access-54tmd\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.535452 master-0 kubenswrapper[26474]: I0223 13:28:18.534375 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.535452 master-0 kubenswrapper[26474]: I0223 13:28:18.535095 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tmd\" (UniqueName: \"kubernetes.io/projected/38e1286f-c726-4775-9da2-acb19ace1a0f-kube-api-access-54tmd\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.535452 master-0 kubenswrapper[26474]: I0223 13:28:18.535293 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.535452 master-0 kubenswrapper[26474]: I0223 13:28:18.535354 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-svc\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.535452 master-0 kubenswrapper[26474]: I0223 13:28:18.535396 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.536067 master-0 kubenswrapper[26474]: I0223 13:28:18.535491 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-config\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.536067 master-0 kubenswrapper[26474]: I0223 13:28:18.535947 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-svc\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.536244 master-0 kubenswrapper[26474]: I0223 13:28:18.536161 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-swift-storage-0\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.536244 master-0 kubenswrapper[26474]: I0223 13:28:18.536177 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.541359 master-0 kubenswrapper[26474]: I0223 13:28:18.536757 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-swift-storage-0\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.541359 master-0 kubenswrapper[26474]: I0223 13:28:18.537212 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-config\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.557417 master-0 kubenswrapper[26474]: I0223 13:28:18.552620 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tmd\" (UniqueName: \"kubernetes.io/projected/38e1286f-c726-4775-9da2-acb19ace1a0f-kube-api-access-54tmd\") pod \"dnsmasq-dns-7ffb5d646f-ncmzp\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:18.637767 master-0 kubenswrapper[26474]: I0223 13:28:18.637600 26474 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:19.099515 master-0 kubenswrapper[26474]: W0223 13:28:19.099328 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38e1286f_c726_4775_9da2_acb19ace1a0f.slice/crio-b5a04d8fa089a0a24550075d2095cbb3993aca8884c5f3551d158ecf7603e676 WatchSource:0}: Error finding container b5a04d8fa089a0a24550075d2095cbb3993aca8884c5f3551d158ecf7603e676: Status 404 returned error can't find the container with id b5a04d8fa089a0a24550075d2095cbb3993aca8884c5f3551d158ecf7603e676 Feb 23 13:28:19.103854 master-0 kubenswrapper[26474]: I0223 13:28:19.103802 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ffb5d646f-ncmzp"] Feb 23 13:28:19.363536 master-0 kubenswrapper[26474]: I0223 13:28:19.363408 26474 generic.go:334] "Generic (PLEG): container finished" podID="38e1286f-c726-4775-9da2-acb19ace1a0f" containerID="3d2beddc4176de4160301ef44f20241525cae5a42e8720e668be27a67a89d2ac" exitCode=0 Feb 23 13:28:19.363536 master-0 kubenswrapper[26474]: I0223 13:28:19.363470 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" event={"ID":"38e1286f-c726-4775-9da2-acb19ace1a0f","Type":"ContainerDied","Data":"3d2beddc4176de4160301ef44f20241525cae5a42e8720e668be27a67a89d2ac"} Feb 23 13:28:19.364034 master-0 kubenswrapper[26474]: I0223 13:28:19.363551 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" event={"ID":"38e1286f-c726-4775-9da2-acb19ace1a0f","Type":"ContainerStarted","Data":"b5a04d8fa089a0a24550075d2095cbb3993aca8884c5f3551d158ecf7603e676"} Feb 23 13:28:20.377680 master-0 kubenswrapper[26474]: I0223 13:28:20.377596 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" 
event={"ID":"38e1286f-c726-4775-9da2-acb19ace1a0f","Type":"ContainerStarted","Data":"17dff80672e1328e04f7b2504e9d2f193866400ca5059cb4856696c2eef5fd7b"} Feb 23 13:28:20.378229 master-0 kubenswrapper[26474]: I0223 13:28:20.377750 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:20.402509 master-0 kubenswrapper[26474]: I0223 13:28:20.402408 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" podStartSLOduration=2.402392298 podStartE2EDuration="2.402392298s" podCreationTimestamp="2026-02-23 13:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:20.399356644 +0000 UTC m=+822.245864331" watchObservedRunningTime="2026-02-23 13:28:20.402392298 +0000 UTC m=+822.248899975" Feb 23 13:28:22.412010 master-0 kubenswrapper[26474]: I0223 13:28:22.411933 26474 generic.go:334] "Generic (PLEG): container finished" podID="4d5d620a-05d5-4a5d-8e3b-596cf44d0713" containerID="19f9dc6d5fbc07d9f0abac4920d0888b0a6272322d6423ad80d1f37b75508fb2" exitCode=0 Feb 23 13:28:22.421592 master-0 kubenswrapper[26474]: I0223 13:28:22.421482 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zkbd6" event={"ID":"4d5d620a-05d5-4a5d-8e3b-596cf44d0713","Type":"ContainerDied","Data":"19f9dc6d5fbc07d9f0abac4920d0888b0a6272322d6423ad80d1f37b75508fb2"} Feb 23 13:28:23.859833 master-0 kubenswrapper[26474]: I0223 13:28:23.859759 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zkbd6"
Feb 23 13:28:23.967722 master-0 kubenswrapper[26474]: I0223 13:28:23.967658 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-config-data\") pod \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") "
Feb 23 13:28:23.967972 master-0 kubenswrapper[26474]: I0223 13:28:23.967751 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-combined-ca-bundle\") pod \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") "
Feb 23 13:28:23.967972 master-0 kubenswrapper[26474]: I0223 13:28:23.967904 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jggv\" (UniqueName: \"kubernetes.io/projected/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-kube-api-access-6jggv\") pod \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\" (UID: \"4d5d620a-05d5-4a5d-8e3b-596cf44d0713\") "
Feb 23 13:28:23.971049 master-0 kubenswrapper[26474]: I0223 13:28:23.970962 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-kube-api-access-6jggv" (OuterVolumeSpecName: "kube-api-access-6jggv") pod "4d5d620a-05d5-4a5d-8e3b-596cf44d0713" (UID: "4d5d620a-05d5-4a5d-8e3b-596cf44d0713"). InnerVolumeSpecName "kube-api-access-6jggv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:23.992964 master-0 kubenswrapper[26474]: I0223 13:28:23.992879 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d5d620a-05d5-4a5d-8e3b-596cf44d0713" (UID: "4d5d620a-05d5-4a5d-8e3b-596cf44d0713"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:24.014046 master-0 kubenswrapper[26474]: I0223 13:28:24.013975 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-config-data" (OuterVolumeSpecName: "config-data") pod "4d5d620a-05d5-4a5d-8e3b-596cf44d0713" (UID: "4d5d620a-05d5-4a5d-8e3b-596cf44d0713"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:24.070688 master-0 kubenswrapper[26474]: I0223 13:28:24.070613 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:24.070688 master-0 kubenswrapper[26474]: I0223 13:28:24.070671 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:24.070937 master-0 kubenswrapper[26474]: I0223 13:28:24.070700 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jggv\" (UniqueName: \"kubernetes.io/projected/4d5d620a-05d5-4a5d-8e3b-596cf44d0713-kube-api-access-6jggv\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:24.443498 master-0 kubenswrapper[26474]: I0223 13:28:24.443440 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zkbd6" event={"ID":"4d5d620a-05d5-4a5d-8e3b-596cf44d0713","Type":"ContainerDied","Data":"177bd0cefbb4cad210ad433be2f56c938d1baed3d1c12af47e583bfbe20909ac"}
Feb 23 13:28:24.443685 master-0 kubenswrapper[26474]: I0223 13:28:24.443506 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="177bd0cefbb4cad210ad433be2f56c938d1baed3d1c12af47e583bfbe20909ac"
Feb 23 13:28:24.443685 master-0 kubenswrapper[26474]: I0223 13:28:24.443543 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zkbd6"
Feb 23 13:28:24.790373 master-0 kubenswrapper[26474]: I0223 13:28:24.787288 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-q77rc"]
Feb 23 13:28:24.790373 master-0 kubenswrapper[26474]: E0223 13:28:24.787833 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5d620a-05d5-4a5d-8e3b-596cf44d0713" containerName="keystone-db-sync"
Feb 23 13:28:24.790373 master-0 kubenswrapper[26474]: I0223 13:28:24.787847 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5d620a-05d5-4a5d-8e3b-596cf44d0713" containerName="keystone-db-sync"
Feb 23 13:28:24.790373 master-0 kubenswrapper[26474]: I0223 13:28:24.788877 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5d620a-05d5-4a5d-8e3b-596cf44d0713" containerName="keystone-db-sync"
Feb 23 13:28:24.790373 master-0 kubenswrapper[26474]: I0223 13:28:24.789662 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:24.797767 master-0 kubenswrapper[26474]: I0223 13:28:24.796867 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 23 13:28:24.797767 master-0 kubenswrapper[26474]: I0223 13:28:24.797114 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 23 13:28:24.797767 master-0 kubenswrapper[26474]: I0223 13:28:24.797275 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 23 13:28:24.797767 master-0 kubenswrapper[26474]: I0223 13:28:24.797495 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 23 13:28:24.822368 master-0 kubenswrapper[26474]: I0223 13:28:24.817529 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q77rc"]
Feb 23 13:28:24.859384 master-0 kubenswrapper[26474]: I0223 13:28:24.851949 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ffb5d646f-ncmzp"]
Feb 23 13:28:24.859384 master-0 kubenswrapper[26474]: I0223 13:28:24.852372 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" podUID="38e1286f-c726-4775-9da2-acb19ace1a0f" containerName="dnsmasq-dns" containerID="cri-o://17dff80672e1328e04f7b2504e9d2f193866400ca5059cb4856696c2eef5fd7b" gracePeriod=10
Feb 23 13:28:24.859384 master-0 kubenswrapper[26474]: I0223 13:28:24.854731 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp"
Feb 23 13:28:24.903787 master-0 kubenswrapper[26474]: I0223 13:28:24.902696 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fdc59b98f-xpl4l"]
Feb 23 13:28:24.927807 master-0 kubenswrapper[26474]: I0223 13:28:24.924912 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fdc59b98f-xpl4l"]
Feb 23 13:28:24.927807 master-0 kubenswrapper[26474]: I0223 13:28:24.925041 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:24.946363 master-0 kubenswrapper[26474]: I0223 13:28:24.931581 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-credential-keys\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:24.946363 master-0 kubenswrapper[26474]: I0223 13:28:24.931635 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt9nv\" (UniqueName: \"kubernetes.io/projected/e3a9da5f-fbc1-4acf-8faf-222482a33f75-kube-api-access-dt9nv\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:24.946363 master-0 kubenswrapper[26474]: I0223 13:28:24.934755 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-combined-ca-bundle\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:24.946363 master-0 kubenswrapper[26474]: I0223 13:28:24.934944 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-scripts\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:24.946363 master-0 kubenswrapper[26474]: I0223 13:28:24.935064 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-config-data\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:24.946363 master-0 kubenswrapper[26474]: I0223 13:28:24.935094 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-fernet-keys\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.046691 master-0 kubenswrapper[26474]: I0223 13:28:25.046634 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-smgfj"]
Feb 23 13:28:25.064969 master-0 kubenswrapper[26474]: I0223 13:28:25.056504 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.074479 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-config-data\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.074710 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-nb\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.074771 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-fernet-keys\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075181 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-credential-keys\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075233 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt9nv\" (UniqueName: \"kubernetes.io/projected/e3a9da5f-fbc1-4acf-8faf-222482a33f75-kube-api-access-dt9nv\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075290 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-combined-ca-bundle\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075411 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtr6\" (UniqueName: \"kubernetes.io/projected/c0a210ce-25c8-408e-91b7-6bb481931b8e-kube-api-access-cqtr6\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075451 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-config\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075515 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-swift-storage-0\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075689 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-svc\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075824 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-scripts\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.075980 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-sb\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.100143 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-scripts\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.118656 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-credential-keys\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.119112 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-db-sync-fnmxd"]
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.121874 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-fernet-keys\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.122448 master-0 kubenswrapper[26474]: I0223 13:28:25.122282 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-config-data\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.126833 master-0 kubenswrapper[26474]: I0223 13:28:25.122790 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt9nv\" (UniqueName: \"kubernetes.io/projected/e3a9da5f-fbc1-4acf-8faf-222482a33f75-kube-api-access-dt9nv\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.126833 master-0 kubenswrapper[26474]: I0223 13:28:25.124143 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-combined-ca-bundle\") pod \"keystone-bootstrap-q77rc\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.126833 master-0 kubenswrapper[26474]: I0223 13:28:25.124390 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.129212 master-0 kubenswrapper[26474]: I0223 13:28:25.126949 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-scripts"
Feb 23 13:28:25.129212 master-0 kubenswrapper[26474]: I0223 13:28:25.127302 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-config-data"
Feb 23 13:28:25.210620 master-0 kubenswrapper[26474]: I0223 13:28:25.210562 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.219293 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-db-sync-config-data\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.219402 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqtr6\" (UniqueName: \"kubernetes.io/projected/c0a210ce-25c8-408e-91b7-6bb481931b8e-kube-api-access-cqtr6\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.219441 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-config\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.219481 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-swift-storage-0\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.219553 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvzd\" (UniqueName: \"kubernetes.io/projected/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-kube-api-access-rnvzd\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.220266 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-config\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.220481 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-svc\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.220543 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323323e6-77db-4dbe-b877-c3875b42211a-operator-scripts\") pod \"ironic-db-create-smgfj\" (UID: \"323323e6-77db-4dbe-b877-c3875b42211a\") " pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.220616 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-sb\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.221755 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-svc\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.221809 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-scripts\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.222162 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-swift-storage-0\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.222189 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-etc-machine-id\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.222305 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-nb\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.222304 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-sb\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.222490 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6pnb\" (UniqueName: \"kubernetes.io/projected/323323e6-77db-4dbe-b877-c3875b42211a-kube-api-access-k6pnb\") pod \"ironic-db-create-smgfj\" (UID: \"323323e6-77db-4dbe-b877-c3875b42211a\") " pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.222547 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-combined-ca-bundle\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.222572 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-config-data\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.223183 master-0 kubenswrapper[26474]: I0223 13:28:25.223120 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-nb\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.268494 master-0 kubenswrapper[26474]: I0223 13:28:25.268324 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqtr6\" (UniqueName: \"kubernetes.io/projected/c0a210ce-25c8-408e-91b7-6bb481931b8e-kube-api-access-cqtr6\") pod \"dnsmasq-dns-fdc59b98f-xpl4l\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") " pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.285272 master-0 kubenswrapper[26474]: I0223 13:28:25.285219 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-smgfj"]
Feb 23 13:28:25.305142 master-0 kubenswrapper[26474]: I0223 13:28:25.305080 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kqjnt"]
Feb 23 13:28:25.308562 master-0 kubenswrapper[26474]: I0223 13:28:25.306445 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-db-sync-fnmxd"]
Feb 23 13:28:25.308562 master-0 kubenswrapper[26474]: I0223 13:28:25.306528 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.309043 master-0 kubenswrapper[26474]: I0223 13:28:25.309022 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 23 13:28:25.309197 master-0 kubenswrapper[26474]: I0223 13:28:25.309183 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 23 13:28:25.327543 master-0 kubenswrapper[26474]: I0223 13:28:25.327484 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kqjnt"]
Feb 23 13:28:25.328788 master-0 kubenswrapper[26474]: I0223 13:28:25.328716 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-etc-machine-id\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.328870 master-0 kubenswrapper[26474]: I0223 13:28:25.328831 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6pnb\" (UniqueName: \"kubernetes.io/projected/323323e6-77db-4dbe-b877-c3875b42211a-kube-api-access-k6pnb\") pod \"ironic-db-create-smgfj\" (UID: \"323323e6-77db-4dbe-b877-c3875b42211a\") " pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:25.328870 master-0 kubenswrapper[26474]: I0223 13:28:25.328861 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-combined-ca-bundle\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.329021 master-0 kubenswrapper[26474]: I0223 13:28:25.328879 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-config-data\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.329021 master-0 kubenswrapper[26474]: I0223 13:28:25.328909 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-config\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.329021 master-0 kubenswrapper[26474]: I0223 13:28:25.328937 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-db-sync-config-data\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.329021 master-0 kubenswrapper[26474]: I0223 13:28:25.328982 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7xmv\" (UniqueName: \"kubernetes.io/projected/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-kube-api-access-r7xmv\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.329021 master-0 kubenswrapper[26474]: I0223 13:28:25.328999 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-combined-ca-bundle\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.329445 master-0 kubenswrapper[26474]: I0223 13:28:25.329034 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvzd\" (UniqueName: \"kubernetes.io/projected/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-kube-api-access-rnvzd\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.329445 master-0 kubenswrapper[26474]: I0223 13:28:25.329066 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323323e6-77db-4dbe-b877-c3875b42211a-operator-scripts\") pod \"ironic-db-create-smgfj\" (UID: \"323323e6-77db-4dbe-b877-c3875b42211a\") " pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:25.329445 master-0 kubenswrapper[26474]: I0223 13:28:25.329097 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-scripts\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.329445 master-0 kubenswrapper[26474]: I0223 13:28:25.329228 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-etc-machine-id\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.330192 master-0 kubenswrapper[26474]: I0223 13:28:25.330157 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323323e6-77db-4dbe-b877-c3875b42211a-operator-scripts\") pod \"ironic-db-create-smgfj\" (UID: \"323323e6-77db-4dbe-b877-c3875b42211a\") " pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:25.336719 master-0 kubenswrapper[26474]: I0223 13:28:25.336662 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-scripts\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.338005 master-0 kubenswrapper[26474]: I0223 13:28:25.337967 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-combined-ca-bundle\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.340963 master-0 kubenswrapper[26474]: I0223 13:28:25.340907 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:25.341052 master-0 kubenswrapper[26474]: I0223 13:28:25.341027 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-db-sync-config-data\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.341859 master-0 kubenswrapper[26474]: I0223 13:28:25.341808 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-config-data\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.354694 master-0 kubenswrapper[26474]: I0223 13:28:25.354629 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvzd\" (UniqueName: \"kubernetes.io/projected/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-kube-api-access-rnvzd\") pod \"cinder-083a9-db-sync-fnmxd\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.363579 master-0 kubenswrapper[26474]: I0223 13:28:25.359966 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6pnb\" (UniqueName: \"kubernetes.io/projected/323323e6-77db-4dbe-b877-c3875b42211a-kube-api-access-k6pnb\") pod \"ironic-db-create-smgfj\" (UID: \"323323e6-77db-4dbe-b877-c3875b42211a\") " pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:25.430099 master-0 kubenswrapper[26474]: I0223 13:28:25.430001 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-db-sync-fnmxd"
Feb 23 13:28:25.432863 master-0 kubenswrapper[26474]: I0223 13:28:25.430877 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-config\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.432863 master-0 kubenswrapper[26474]: I0223 13:28:25.430976 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7xmv\" (UniqueName: \"kubernetes.io/projected/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-kube-api-access-r7xmv\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.432863 master-0 kubenswrapper[26474]: I0223 13:28:25.430997 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-combined-ca-bundle\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.440657 master-0 kubenswrapper[26474]: I0223 13:28:25.437551 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-config\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.441965 master-0 kubenswrapper[26474]: I0223 13:28:25.441841 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-combined-ca-bundle\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.454441 master-0 kubenswrapper[26474]: I0223 13:28:25.448110 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7xmv\" (UniqueName: \"kubernetes.io/projected/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-kube-api-access-r7xmv\") pod \"neutron-db-sync-kqjnt\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " pod="openstack/neutron-db-sync-kqjnt"
Feb 23 13:28:25.454441 master-0 kubenswrapper[26474]: I0223 13:28:25.453634 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-dc5b-account-create-update-t78mz"]
Feb 23 13:28:25.456626 master-0 kubenswrapper[26474]: I0223 13:28:25.455134 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-dc5b-account-create-update-t78mz"
Feb 23 13:28:25.467369 master-0 kubenswrapper[26474]: I0223 13:28:25.465609 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret"
Feb 23 13:28:25.531374 master-0 kubenswrapper[26474]: I0223 13:28:25.528877 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-dc5b-account-create-update-t78mz"]
Feb 23 13:28:25.534513 master-0 kubenswrapper[26474]: I0223 13:28:25.533203 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxq7g\" (UniqueName: \"kubernetes.io/projected/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-kube-api-access-bxq7g\") pod \"ironic-dc5b-account-create-update-t78mz\" (UID: \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\") " pod="openstack/ironic-dc5b-account-create-update-t78mz"
Feb 23 13:28:25.534513 master-0 kubenswrapper[26474]: I0223 13:28:25.533744 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-operator-scripts\") pod \"ironic-dc5b-account-create-update-t78mz\" (UID: \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\") " pod="openstack/ironic-dc5b-account-create-update-t78mz"
Feb 23 13:28:25.549685 master-0 kubenswrapper[26474]: I0223 13:28:25.539803 26474 generic.go:334] "Generic (PLEG): container finished" podID="38e1286f-c726-4775-9da2-acb19ace1a0f" containerID="17dff80672e1328e04f7b2504e9d2f193866400ca5059cb4856696c2eef5fd7b" exitCode=0
Feb 23 13:28:25.549685 master-0 kubenswrapper[26474]: I0223 13:28:25.539866 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" event={"ID":"38e1286f-c726-4775-9da2-acb19ace1a0f","Type":"ContainerDied","Data":"17dff80672e1328e04f7b2504e9d2f193866400ca5059cb4856696c2eef5fd7b"}
Feb 23 13:28:25.554555 master-0
kubenswrapper[26474]: I0223 13:28:25.552988 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-smgfj" Feb 23 13:28:25.565240 master-0 kubenswrapper[26474]: I0223 13:28:25.565161 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jwdv9"] Feb 23 13:28:25.570487 master-0 kubenswrapper[26474]: I0223 13:28:25.570354 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.574090 master-0 kubenswrapper[26474]: I0223 13:28:25.574041 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 13:28:25.574283 master-0 kubenswrapper[26474]: I0223 13:28:25.574263 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 13:28:25.583298 master-0 kubenswrapper[26474]: I0223 13:28:25.583229 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jwdv9"] Feb 23 13:28:25.593823 master-0 kubenswrapper[26474]: I0223 13:28:25.593759 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fdc59b98f-xpl4l"] Feb 23 13:28:25.604649 master-0 kubenswrapper[26474]: I0223 13:28:25.604592 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7544b46fd7-pt6kw"] Feb 23 13:28:25.606823 master-0 kubenswrapper[26474]: I0223 13:28:25.606766 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.621846 master-0 kubenswrapper[26474]: I0223 13:28:25.621777 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7544b46fd7-pt6kw"] Feb 23 13:28:25.628469 master-0 kubenswrapper[26474]: I0223 13:28:25.628420 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:25.634484 master-0 kubenswrapper[26474]: I0223 13:28:25.634410 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-config\") pod \"38e1286f-c726-4775-9da2-acb19ace1a0f\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " Feb 23 13:28:25.634484 master-0 kubenswrapper[26474]: I0223 13:28:25.634463 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-sb\") pod \"38e1286f-c726-4775-9da2-acb19ace1a0f\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " Feb 23 13:28:25.634695 master-0 kubenswrapper[26474]: I0223 13:28:25.634507 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54tmd\" (UniqueName: \"kubernetes.io/projected/38e1286f-c726-4775-9da2-acb19ace1a0f-kube-api-access-54tmd\") pod \"38e1286f-c726-4775-9da2-acb19ace1a0f\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " Feb 23 13:28:25.634695 master-0 kubenswrapper[26474]: I0223 13:28:25.634558 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-nb\") pod \"38e1286f-c726-4775-9da2-acb19ace1a0f\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " Feb 23 13:28:25.634695 master-0 kubenswrapper[26474]: I0223 13:28:25.634596 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-swift-storage-0\") pod \"38e1286f-c726-4775-9da2-acb19ace1a0f\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " Feb 23 13:28:25.634695 master-0 kubenswrapper[26474]: I0223 13:28:25.634643 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-svc\") pod \"38e1286f-c726-4775-9da2-acb19ace1a0f\" (UID: \"38e1286f-c726-4775-9da2-acb19ace1a0f\") " Feb 23 13:28:25.634897 master-0 kubenswrapper[26474]: I0223 13:28:25.634770 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-config\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.634897 master-0 kubenswrapper[26474]: I0223 13:28:25.634797 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-logs\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.634897 master-0 kubenswrapper[26474]: I0223 13:28:25.634829 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-combined-ca-bundle\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.634897 master-0 kubenswrapper[26474]: I0223 13:28:25.634864 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxq7g\" (UniqueName: \"kubernetes.io/projected/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-kube-api-access-bxq7g\") pod \"ironic-dc5b-account-create-update-t78mz\" (UID: \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\") " pod="openstack/ironic-dc5b-account-create-update-t78mz" Feb 23 13:28:25.634897 master-0 kubenswrapper[26474]: I0223 
13:28:25.634883 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5h8q\" (UniqueName: \"kubernetes.io/projected/8609ce54-234c-4673-a9d6-14855102d116-kube-api-access-p5h8q\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.635145 master-0 kubenswrapper[26474]: I0223 13:28:25.634906 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-nb\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.635145 master-0 kubenswrapper[26474]: I0223 13:28:25.634931 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-operator-scripts\") pod \"ironic-dc5b-account-create-update-t78mz\" (UID: \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\") " pod="openstack/ironic-dc5b-account-create-update-t78mz" Feb 23 13:28:25.635145 master-0 kubenswrapper[26474]: I0223 13:28:25.634951 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-scripts\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.635145 master-0 kubenswrapper[26474]: I0223 13:28:25.634972 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-sb\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " 
pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.635145 master-0 kubenswrapper[26474]: I0223 13:28:25.635029 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcxqb\" (UniqueName: \"kubernetes.io/projected/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-kube-api-access-xcxqb\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.635145 master-0 kubenswrapper[26474]: I0223 13:28:25.635074 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-svc\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.635145 master-0 kubenswrapper[26474]: I0223 13:28:25.635103 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-swift-storage-0\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.635145 master-0 kubenswrapper[26474]: I0223 13:28:25.635122 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-config-data\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.636307 master-0 kubenswrapper[26474]: I0223 13:28:25.636213 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-operator-scripts\") pod 
\"ironic-dc5b-account-create-update-t78mz\" (UID: \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\") " pod="openstack/ironic-dc5b-account-create-update-t78mz" Feb 23 13:28:25.644418 master-0 kubenswrapper[26474]: I0223 13:28:25.644294 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e1286f-c726-4775-9da2-acb19ace1a0f-kube-api-access-54tmd" (OuterVolumeSpecName: "kube-api-access-54tmd") pod "38e1286f-c726-4775-9da2-acb19ace1a0f" (UID: "38e1286f-c726-4775-9da2-acb19ace1a0f"). InnerVolumeSpecName "kube-api-access-54tmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:25.679236 master-0 kubenswrapper[26474]: I0223 13:28:25.679160 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxq7g\" (UniqueName: \"kubernetes.io/projected/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-kube-api-access-bxq7g\") pod \"ironic-dc5b-account-create-update-t78mz\" (UID: \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\") " pod="openstack/ironic-dc5b-account-create-update-t78mz" Feb 23 13:28:25.703364 master-0 kubenswrapper[26474]: I0223 13:28:25.701673 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38e1286f-c726-4775-9da2-acb19ace1a0f" (UID: "38e1286f-c726-4775-9da2-acb19ace1a0f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:25.710625 master-0 kubenswrapper[26474]: I0223 13:28:25.710573 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-config" (OuterVolumeSpecName: "config") pod "38e1286f-c726-4775-9da2-acb19ace1a0f" (UID: "38e1286f-c726-4775-9da2-acb19ace1a0f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:25.726959 master-0 kubenswrapper[26474]: I0223 13:28:25.725104 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38e1286f-c726-4775-9da2-acb19ace1a0f" (UID: "38e1286f-c726-4775-9da2-acb19ace1a0f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:25.726959 master-0 kubenswrapper[26474]: I0223 13:28:25.726077 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38e1286f-c726-4775-9da2-acb19ace1a0f" (UID: "38e1286f-c726-4775-9da2-acb19ace1a0f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:25.738968 master-0 kubenswrapper[26474]: I0223 13:28:25.738919 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-swift-storage-0\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.739165 master-0 kubenswrapper[26474]: I0223 13:28:25.738974 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-config-data\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.739165 master-0 kubenswrapper[26474]: I0223 13:28:25.739037 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-config\") 
pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.739165 master-0 kubenswrapper[26474]: I0223 13:28:25.739087 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-logs\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.739165 master-0 kubenswrapper[26474]: I0223 13:28:25.739120 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-combined-ca-bundle\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.739497 master-0 kubenswrapper[26474]: I0223 13:28:25.739165 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5h8q\" (UniqueName: \"kubernetes.io/projected/8609ce54-234c-4673-a9d6-14855102d116-kube-api-access-p5h8q\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.739497 master-0 kubenswrapper[26474]: I0223 13:28:25.739190 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-nb\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.739497 master-0 kubenswrapper[26474]: I0223 13:28:25.739446 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-scripts\") pod 
\"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.739497 master-0 kubenswrapper[26474]: I0223 13:28:25.739472 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-sb\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.739615 master-0 kubenswrapper[26474]: I0223 13:28:25.739546 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcxqb\" (UniqueName: \"kubernetes.io/projected/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-kube-api-access-xcxqb\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.739615 master-0 kubenswrapper[26474]: I0223 13:28:25.739565 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-svc\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.739674 master-0 kubenswrapper[26474]: I0223 13:28:25.739617 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:25.739674 master-0 kubenswrapper[26474]: I0223 13:28:25.739630 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:25.739674 master-0 kubenswrapper[26474]: I0223 13:28:25.739642 26474 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:25.739674 master-0 kubenswrapper[26474]: I0223 13:28:25.739653 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54tmd\" (UniqueName: \"kubernetes.io/projected/38e1286f-c726-4775-9da2-acb19ace1a0f-kube-api-access-54tmd\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:25.739674 master-0 kubenswrapper[26474]: I0223 13:28:25.739663 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:25.740313 master-0 kubenswrapper[26474]: I0223 13:28:25.740273 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-swift-storage-0\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.740467 master-0 kubenswrapper[26474]: I0223 13:28:25.740445 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-svc\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.741115 master-0 kubenswrapper[26474]: I0223 13:28:25.740777 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-logs\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.741569 master-0 kubenswrapper[26474]: I0223 13:28:25.741520 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-nb\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.743507 master-0 kubenswrapper[26474]: I0223 13:28:25.743463 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-config\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.743927 master-0 kubenswrapper[26474]: I0223 13:28:25.743901 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-sb\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.744431 master-0 kubenswrapper[26474]: I0223 13:28:25.744389 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-combined-ca-bundle\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.745996 master-0 kubenswrapper[26474]: I0223 13:28:25.745240 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kqjnt" Feb 23 13:28:25.750933 master-0 kubenswrapper[26474]: I0223 13:28:25.747299 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-config-data\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.760210 master-0 kubenswrapper[26474]: I0223 13:28:25.752494 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-scripts\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.760210 master-0 kubenswrapper[26474]: I0223 13:28:25.759615 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcxqb\" (UniqueName: \"kubernetes.io/projected/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-kube-api-access-xcxqb\") pod \"placement-db-sync-jwdv9\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:25.769941 master-0 kubenswrapper[26474]: I0223 13:28:25.769874 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5h8q\" (UniqueName: \"kubernetes.io/projected/8609ce54-234c-4673-a9d6-14855102d116-kube-api-access-p5h8q\") pod \"dnsmasq-dns-7544b46fd7-pt6kw\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:25.778049 master-0 kubenswrapper[26474]: I0223 13:28:25.777994 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38e1286f-c726-4775-9da2-acb19ace1a0f" (UID: "38e1286f-c726-4775-9da2-acb19ace1a0f"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:25.786985 master-0 kubenswrapper[26474]: I0223 13:28:25.786854 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-dc5b-account-create-update-t78mz" Feb 23 13:28:25.864243 master-0 kubenswrapper[26474]: I0223 13:28:25.863911 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38e1286f-c726-4775-9da2-acb19ace1a0f-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:25.961412 master-0 kubenswrapper[26474]: I0223 13:28:25.959445 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:26.007095 master-0 kubenswrapper[26474]: I0223 13:28:26.001894 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:26.060177 master-0 kubenswrapper[26474]: I0223 13:28:26.060133 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q77rc"] Feb 23 13:28:26.327137 master-0 kubenswrapper[26474]: I0223 13:28:26.324578 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-smgfj"] Feb 23 13:28:26.342919 master-0 kubenswrapper[26474]: I0223 13:28:26.342824 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fdc59b98f-xpl4l"] Feb 23 13:28:26.383582 master-0 kubenswrapper[26474]: I0223 13:28:26.383455 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-db-sync-fnmxd"] Feb 23 13:28:26.575901 master-0 kubenswrapper[26474]: I0223 13:28:26.575769 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-smgfj" 
event={"ID":"323323e6-77db-4dbe-b877-c3875b42211a","Type":"ContainerStarted","Data":"81e9fd4aa5d1f965536c9857fc0a6aa9ff87873823ccbe6969e347b33f920085"} Feb 23 13:28:26.588665 master-0 kubenswrapper[26474]: I0223 13:28:26.585512 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" event={"ID":"38e1286f-c726-4775-9da2-acb19ace1a0f","Type":"ContainerDied","Data":"b5a04d8fa089a0a24550075d2095cbb3993aca8884c5f3551d158ecf7603e676"} Feb 23 13:28:26.588665 master-0 kubenswrapper[26474]: I0223 13:28:26.585589 26474 scope.go:117] "RemoveContainer" containerID="17dff80672e1328e04f7b2504e9d2f193866400ca5059cb4856696c2eef5fd7b" Feb 23 13:28:26.588665 master-0 kubenswrapper[26474]: I0223 13:28:26.585753 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ffb5d646f-ncmzp" Feb 23 13:28:26.589841 master-0 kubenswrapper[26474]: I0223 13:28:26.589772 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-db-sync-fnmxd" event={"ID":"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b","Type":"ContainerStarted","Data":"4282ef04d32e27a52e2b427b82337211c15df51cd069147de2452b7989ee3bb3"} Feb 23 13:28:26.591668 master-0 kubenswrapper[26474]: I0223 13:28:26.591633 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q77rc" event={"ID":"e3a9da5f-fbc1-4acf-8faf-222482a33f75","Type":"ContainerStarted","Data":"488142c67083ba34aa4e79e5c7b754c28e19007162aad299739d3f60cdc9a1d9"} Feb 23 13:28:26.594653 master-0 kubenswrapper[26474]: I0223 13:28:26.594601 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l" event={"ID":"c0a210ce-25c8-408e-91b7-6bb481931b8e","Type":"ContainerStarted","Data":"9df89252e2b1b2dc1e45886702406a97e473225d0d1806151fccb08f45ad1548"} Feb 23 13:28:26.622586 master-0 kubenswrapper[26474]: I0223 13:28:26.622062 26474 scope.go:117] "RemoveContainer" 
containerID="3d2beddc4176de4160301ef44f20241525cae5a42e8720e668be27a67a89d2ac" Feb 23 13:28:26.671473 master-0 kubenswrapper[26474]: I0223 13:28:26.671404 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ffb5d646f-ncmzp"] Feb 23 13:28:26.704921 master-0 kubenswrapper[26474]: I0223 13:28:26.704840 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ffb5d646f-ncmzp"] Feb 23 13:28:26.929782 master-0 kubenswrapper[26474]: I0223 13:28:26.929707 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4fec4-default-external-api-0"] Feb 23 13:28:26.930521 master-0 kubenswrapper[26474]: E0223 13:28:26.930445 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e1286f-c726-4775-9da2-acb19ace1a0f" containerName="dnsmasq-dns" Feb 23 13:28:26.931518 master-0 kubenswrapper[26474]: I0223 13:28:26.931450 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e1286f-c726-4775-9da2-acb19ace1a0f" containerName="dnsmasq-dns" Feb 23 13:28:26.931612 master-0 kubenswrapper[26474]: E0223 13:28:26.931551 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e1286f-c726-4775-9da2-acb19ace1a0f" containerName="init" Feb 23 13:28:26.931612 master-0 kubenswrapper[26474]: I0223 13:28:26.931565 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e1286f-c726-4775-9da2-acb19ace1a0f" containerName="init" Feb 23 13:28:26.932394 master-0 kubenswrapper[26474]: I0223 13:28:26.932228 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e1286f-c726-4775-9da2-acb19ace1a0f" containerName="dnsmasq-dns" Feb 23 13:28:26.934116 master-0 kubenswrapper[26474]: I0223 13:28:26.933700 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:26.947510 master-0 kubenswrapper[26474]: I0223 13:28:26.947387 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 23 13:28:26.947731 master-0 kubenswrapper[26474]: I0223 13:28:26.947721 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4fec4-default-external-config-data" Feb 23 13:28:26.947911 master-0 kubenswrapper[26474]: I0223 13:28:26.947888 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 13:28:26.994660 master-0 kubenswrapper[26474]: I0223 13:28:26.991293 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kqjnt"] Feb 23 13:28:26.994660 master-0 kubenswrapper[26474]: I0223 13:28:26.994640 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:26.994660 master-0 kubenswrapper[26474]: I0223 13:28:26.994829 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:26.995733 master-0 kubenswrapper[26474]: I0223 13:28:26.994921 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-httpd-run\") pod 
\"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:26.995733 master-0 kubenswrapper[26474]: I0223 13:28:26.995117 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:26.995733 master-0 kubenswrapper[26474]: I0223 13:28:26.995170 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxzc\" (UniqueName: \"kubernetes.io/projected/4ca22878-e3cc-4177-9633-c79105d41fab-kube-api-access-zgxzc\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:26.995733 master-0 kubenswrapper[26474]: I0223 13:28:26.995194 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:26.995733 master-0 kubenswrapper[26474]: I0223 13:28:26.995299 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:26.995733 master-0 kubenswrapper[26474]: I0223 13:28:26.995353 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.001752 master-0 kubenswrapper[26474]: I0223 13:28:27.001635 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-dc5b-account-create-update-t78mz"] Feb 23 13:28:27.027584 master-0 kubenswrapper[26474]: I0223 13:28:27.027521 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"] Feb 23 13:28:27.039558 master-0 kubenswrapper[26474]: I0223 13:28:27.039503 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7544b46fd7-pt6kw"] Feb 23 13:28:27.098812 master-0 kubenswrapper[26474]: I0223 13:28:27.098749 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jwdv9"] Feb 23 13:28:27.104761 master-0 kubenswrapper[26474]: I0223 13:28:27.104629 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-httpd-run\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.105121 master-0 kubenswrapper[26474]: I0223 13:28:27.105106 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.105229 master-0 kubenswrapper[26474]: I0223 13:28:27.105209 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zgxzc\" (UniqueName: \"kubernetes.io/projected/4ca22878-e3cc-4177-9633-c79105d41fab-kube-api-access-zgxzc\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.105309 master-0 kubenswrapper[26474]: I0223 13:28:27.105295 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.105653 master-0 kubenswrapper[26474]: I0223 13:28:27.105620 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-httpd-run\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.106266 master-0 kubenswrapper[26474]: I0223 13:28:27.106216 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.106324 master-0 kubenswrapper[26474]: I0223 13:28:27.106274 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.106479 master-0 
kubenswrapper[26474]: I0223 13:28:27.106445 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.106520 master-0 kubenswrapper[26474]: I0223 13:28:27.106478 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.108207 master-0 kubenswrapper[26474]: I0223 13:28:27.108180 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.110730 master-0 kubenswrapper[26474]: I0223 13:28:27.110707 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.111626 master-0 kubenswrapper[26474]: I0223 13:28:27.111588 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " 
pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.113206 master-0 kubenswrapper[26474]: I0223 13:28:27.113150 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 23 13:28:27.113206 master-0 kubenswrapper[26474]: I0223 13:28:27.113182 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1725139a53e08998fd353fef3de63f88daa57a9b0265ed3b7798973e2d5ac396/globalmount\"" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.113796 master-0 kubenswrapper[26474]: I0223 13:28:27.113760 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.114608 master-0 kubenswrapper[26474]: I0223 13:28:27.114575 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.123964 master-0 kubenswrapper[26474]: I0223 13:28:27.123935 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxzc\" (UniqueName: \"kubernetes.io/projected/4ca22878-e3cc-4177-9633-c79105d41fab-kube-api-access-zgxzc\") pod \"glance-4fec4-default-external-api-0\" (UID: 
\"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:27.612970 master-0 kubenswrapper[26474]: I0223 13:28:27.612435 26474 generic.go:334] "Generic (PLEG): container finished" podID="323323e6-77db-4dbe-b877-c3875b42211a" containerID="6ac3150aaa5459a665baea737519ee445f4ee4348e87f9054792d4dbf949c11a" exitCode=0 Feb 23 13:28:27.612970 master-0 kubenswrapper[26474]: I0223 13:28:27.612508 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-smgfj" event={"ID":"323323e6-77db-4dbe-b877-c3875b42211a","Type":"ContainerDied","Data":"6ac3150aaa5459a665baea737519ee445f4ee4348e87f9054792d4dbf949c11a"} Feb 23 13:28:27.615561 master-0 kubenswrapper[26474]: I0223 13:28:27.615482 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dc5b-account-create-update-t78mz" event={"ID":"5a9ec263-54d6-4981-9127-e9bd62d1cf7d","Type":"ContainerStarted","Data":"7b2da3601c5b7e5e4d8df48c98d082a05db7cec1946e464dfd133aee3377e71e"} Feb 23 13:28:27.615667 master-0 kubenswrapper[26474]: I0223 13:28:27.615572 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dc5b-account-create-update-t78mz" event={"ID":"5a9ec263-54d6-4981-9127-e9bd62d1cf7d","Type":"ContainerStarted","Data":"e715e0b0181893dff4899a5e1aef2fdf1c742c66c89f4f953fdc8632bf7ceeb0"} Feb 23 13:28:27.617867 master-0 kubenswrapper[26474]: I0223 13:28:27.617285 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kqjnt" event={"ID":"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3","Type":"ContainerStarted","Data":"8dea65238a595dad8c07581919d15fc25d205b3b4aaeeb666eb74621aac64f66"} Feb 23 13:28:27.617867 master-0 kubenswrapper[26474]: I0223 13:28:27.617327 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kqjnt" 
event={"ID":"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3","Type":"ContainerStarted","Data":"4b291f8e6d9bae852675bc4ffc81be22921c6d6e75952b90fe4a104f0ba89a21"} Feb 23 13:28:27.621709 master-0 kubenswrapper[26474]: I0223 13:28:27.621117 26474 generic.go:334] "Generic (PLEG): container finished" podID="8609ce54-234c-4673-a9d6-14855102d116" containerID="20e1e8edcd43ae2b632d5fec61a5cbff24fa01fe2d9b45a36f77345cac29460f" exitCode=0 Feb 23 13:28:27.621709 master-0 kubenswrapper[26474]: I0223 13:28:27.621210 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" event={"ID":"8609ce54-234c-4673-a9d6-14855102d116","Type":"ContainerDied","Data":"20e1e8edcd43ae2b632d5fec61a5cbff24fa01fe2d9b45a36f77345cac29460f"} Feb 23 13:28:27.621709 master-0 kubenswrapper[26474]: I0223 13:28:27.621235 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" event={"ID":"8609ce54-234c-4673-a9d6-14855102d116","Type":"ContainerStarted","Data":"4b842e019ab10f1ab12c939f8406a79ff134d9102af4ff3ab3135701709405d6"} Feb 23 13:28:27.622446 master-0 kubenswrapper[26474]: I0223 13:28:27.622392 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jwdv9" event={"ID":"51bc3c75-4dd4-4b8d-8fab-9035be69b72d","Type":"ContainerStarted","Data":"41812ec43a22c7015f9a439d96378db6e9b99e58eddfcdf80af70325b3b799bc"} Feb 23 13:28:27.626421 master-0 kubenswrapper[26474]: I0223 13:28:27.626252 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q77rc" event={"ID":"e3a9da5f-fbc1-4acf-8faf-222482a33f75","Type":"ContainerStarted","Data":"ccabe7e44c803917cdf2273fbfb9b5d2534d44f374144b586f0bb64313d108e9"} Feb 23 13:28:27.631127 master-0 kubenswrapper[26474]: I0223 13:28:27.631061 26474 generic.go:334] "Generic (PLEG): container finished" podID="c0a210ce-25c8-408e-91b7-6bb481931b8e" containerID="6c7d0b2efa43d775b3a7d45b87655dc91f06e039f01bf98557d53649d68d4263" exitCode=0 Feb 
23 13:28:27.631269 master-0 kubenswrapper[26474]: I0223 13:28:27.631233 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l" event={"ID":"c0a210ce-25c8-408e-91b7-6bb481931b8e","Type":"ContainerDied","Data":"6c7d0b2efa43d775b3a7d45b87655dc91f06e039f01bf98557d53649d68d4263"} Feb 23 13:28:27.671449 master-0 kubenswrapper[26474]: I0223 13:28:27.671355 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kqjnt" podStartSLOduration=3.671314432 podStartE2EDuration="3.671314432s" podCreationTimestamp="2026-02-23 13:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:27.666013902 +0000 UTC m=+829.512521579" watchObservedRunningTime="2026-02-23 13:28:27.671314432 +0000 UTC m=+829.517822109" Feb 23 13:28:27.759196 master-0 kubenswrapper[26474]: I0223 13:28:27.759094 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-q77rc" podStartSLOduration=3.759071542 podStartE2EDuration="3.759071542s" podCreationTimestamp="2026-02-23 13:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:27.715498099 +0000 UTC m=+829.562005776" watchObservedRunningTime="2026-02-23 13:28:27.759071542 +0000 UTC m=+829.605579219" Feb 23 13:28:27.854414 master-0 kubenswrapper[26474]: I0223 13:28:27.834622 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-dc5b-account-create-update-t78mz" podStartSLOduration=2.834595353 podStartE2EDuration="2.834595353s" podCreationTimestamp="2026-02-23 13:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:27.73975429 +0000 UTC m=+829.586261967" 
watchObservedRunningTime="2026-02-23 13:28:27.834595353 +0000 UTC m=+829.681103030" Feb 23 13:28:28.091692 master-0 kubenswrapper[26474]: I0223 13:28:28.091600 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"] Feb 23 13:28:28.093470 master-0 kubenswrapper[26474]: E0223 13:28:28.093391 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-4fec4-default-external-api-0" podUID="4ca22878-e3cc-4177-9633-c79105d41fab" Feb 23 13:28:28.127969 master-0 kubenswrapper[26474]: I0223 13:28:28.127327 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"] Feb 23 13:28:28.130668 master-0 kubenswrapper[26474]: I0223 13:28:28.130517 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"] Feb 23 13:28:28.130668 master-0 kubenswrapper[26474]: I0223 13:28:28.130663 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.134072 master-0 kubenswrapper[26474]: I0223 13:28:28.134041 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 13:28:28.134612 master-0 kubenswrapper[26474]: I0223 13:28:28.134597 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4fec4-default-internal-config-data" Feb 23 13:28:28.239059 master-0 kubenswrapper[26474]: I0223 13:28:28.238985 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-httpd-run\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.239616 master-0 kubenswrapper[26474]: I0223 13:28:28.239352 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-scripts\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.239616 master-0 kubenswrapper[26474]: I0223 13:28:28.239530 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-internal-tls-certs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.240283 master-0 kubenswrapper[26474]: I0223 13:28:28.239922 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-config-data\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.240283 master-0 kubenswrapper[26474]: I0223 13:28:28.240025 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-logs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.240283 master-0 kubenswrapper[26474]: I0223 13:28:28.240053 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dzqh\" (UniqueName: \"kubernetes.io/projected/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-kube-api-access-6dzqh\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.240283 master-0 kubenswrapper[26474]: I0223 13:28:28.240117 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-combined-ca-bundle\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.240283 master-0 kubenswrapper[26474]: I0223 13:28:28.240154 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 
13:28:28.336493 master-0 kubenswrapper[26474]: I0223 13:28:28.336450 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l" Feb 23 13:28:28.342985 master-0 kubenswrapper[26474]: I0223 13:28:28.342885 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-scripts\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.342985 master-0 kubenswrapper[26474]: I0223 13:28:28.342974 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-internal-tls-certs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.343853 master-0 kubenswrapper[26474]: I0223 13:28:28.343009 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-config-data\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.343853 master-0 kubenswrapper[26474]: I0223 13:28:28.343507 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-logs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.343853 master-0 kubenswrapper[26474]: I0223 13:28:28.343539 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6dzqh\" (UniqueName: \"kubernetes.io/projected/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-kube-api-access-6dzqh\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.343853 master-0 kubenswrapper[26474]: I0223 13:28:28.343580 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-combined-ca-bundle\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.343853 master-0 kubenswrapper[26474]: I0223 13:28:28.343609 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.343853 master-0 kubenswrapper[26474]: I0223 13:28:28.343674 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-httpd-run\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.344795 master-0 kubenswrapper[26474]: I0223 13:28:28.344149 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-httpd-run\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.346481 master-0 kubenswrapper[26474]: I0223 
13:28:28.345399 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-logs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.347394 master-0 kubenswrapper[26474]: I0223 13:28:28.347284 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-scripts\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.350733 master-0 kubenswrapper[26474]: I0223 13:28:28.350481 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 23 13:28:28.350733 master-0 kubenswrapper[26474]: I0223 13:28:28.350506 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0e6f58ed848c8ffad14a494b64a381134d4f7ca50c612d6ade3ceaa3a801c011/globalmount\"" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.355648 master-0 kubenswrapper[26474]: I0223 13:28:28.355610 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-combined-ca-bundle\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:28.361290 master-0 kubenswrapper[26474]: I0223 
13:28:28.361250 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-config-data\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:28:28.369396 master-0 kubenswrapper[26474]: I0223 13:28:28.368063 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dzqh\" (UniqueName: \"kubernetes.io/projected/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-kube-api-access-6dzqh\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:28:28.404243 master-0 kubenswrapper[26474]: I0223 13:28:28.404199 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-internal-tls-certs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:28:28.443433 master-0 kubenswrapper[26474]: I0223 13:28:28.443373 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e1286f-c726-4775-9da2-acb19ace1a0f" path="/var/lib/kubelet/pods/38e1286f-c726-4775-9da2-acb19ace1a0f/volumes"
Feb 23 13:28:28.450325 master-0 kubenswrapper[26474]: I0223 13:28:28.450292 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-nb\") pod \"c0a210ce-25c8-408e-91b7-6bb481931b8e\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") "
Feb 23 13:28:28.450748 master-0 kubenswrapper[26474]: I0223 13:28:28.450730 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqtr6\" (UniqueName: \"kubernetes.io/projected/c0a210ce-25c8-408e-91b7-6bb481931b8e-kube-api-access-cqtr6\") pod \"c0a210ce-25c8-408e-91b7-6bb481931b8e\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") "
Feb 23 13:28:28.451844 master-0 kubenswrapper[26474]: I0223 13:28:28.451826 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-svc\") pod \"c0a210ce-25c8-408e-91b7-6bb481931b8e\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") "
Feb 23 13:28:28.451957 master-0 kubenswrapper[26474]: I0223 13:28:28.451945 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-sb\") pod \"c0a210ce-25c8-408e-91b7-6bb481931b8e\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") "
Feb 23 13:28:28.452321 master-0 kubenswrapper[26474]: I0223 13:28:28.452305 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-swift-storage-0\") pod \"c0a210ce-25c8-408e-91b7-6bb481931b8e\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") "
Feb 23 13:28:28.452636 master-0 kubenswrapper[26474]: I0223 13:28:28.452623 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-config\") pod \"c0a210ce-25c8-408e-91b7-6bb481931b8e\" (UID: \"c0a210ce-25c8-408e-91b7-6bb481931b8e\") "
Feb 23 13:28:28.455256 master-0 kubenswrapper[26474]: I0223 13:28:28.455235 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a210ce-25c8-408e-91b7-6bb481931b8e-kube-api-access-cqtr6" (OuterVolumeSpecName: "kube-api-access-cqtr6") pod "c0a210ce-25c8-408e-91b7-6bb481931b8e" (UID: "c0a210ce-25c8-408e-91b7-6bb481931b8e"). InnerVolumeSpecName "kube-api-access-cqtr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:28.485898 master-0 kubenswrapper[26474]: I0223 13:28:28.485827 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0a210ce-25c8-408e-91b7-6bb481931b8e" (UID: "c0a210ce-25c8-408e-91b7-6bb481931b8e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:28.491835 master-0 kubenswrapper[26474]: I0223 13:28:28.491801 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0a210ce-25c8-408e-91b7-6bb481931b8e" (UID: "c0a210ce-25c8-408e-91b7-6bb481931b8e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:28.491971 master-0 kubenswrapper[26474]: I0223 13:28:28.491929 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0a210ce-25c8-408e-91b7-6bb481931b8e" (UID: "c0a210ce-25c8-408e-91b7-6bb481931b8e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:28.527494 master-0 kubenswrapper[26474]: I0223 13:28:28.527426 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-config" (OuterVolumeSpecName: "config") pod "c0a210ce-25c8-408e-91b7-6bb481931b8e" (UID: "c0a210ce-25c8-408e-91b7-6bb481931b8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:28.528169 master-0 kubenswrapper[26474]: I0223 13:28:28.528113 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0a210ce-25c8-408e-91b7-6bb481931b8e" (UID: "c0a210ce-25c8-408e-91b7-6bb481931b8e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:28.558592 master-0 kubenswrapper[26474]: I0223 13:28:28.558135 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.558933 master-0 kubenswrapper[26474]: I0223 13:28:28.558849 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.558933 master-0 kubenswrapper[26474]: I0223 13:28:28.558895 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.559043 master-0 kubenswrapper[26474]: I0223 13:28:28.558970 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.559043 master-0 kubenswrapper[26474]: I0223 13:28:28.558991 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0a210ce-25c8-408e-91b7-6bb481931b8e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.559043 master-0 kubenswrapper[26474]: I0223 13:28:28.559006 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqtr6\" (UniqueName: \"kubernetes.io/projected/c0a210ce-25c8-408e-91b7-6bb481931b8e-kube-api-access-cqtr6\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.571555 master-0 kubenswrapper[26474]: I0223 13:28:28.571461 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:28.659759 master-0 kubenswrapper[26474]: I0223 13:28:28.659650 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" event={"ID":"8609ce54-234c-4673-a9d6-14855102d116","Type":"ContainerStarted","Data":"a1c0df0bb0508bdcba6fbeb32b39cd79503267c8352228073e3cf7ff94c6155a"}
Feb 23 13:28:28.659955 master-0 kubenswrapper[26474]: I0223 13:28:28.659841 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw"
Feb 23 13:28:28.663730 master-0 kubenswrapper[26474]: I0223 13:28:28.663680 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l" event={"ID":"c0a210ce-25c8-408e-91b7-6bb481931b8e","Type":"ContainerDied","Data":"9df89252e2b1b2dc1e45886702406a97e473225d0d1806151fccb08f45ad1548"}
Feb 23 13:28:28.663905 master-0 kubenswrapper[26474]: I0223 13:28:28.663742 26474 scope.go:117] "RemoveContainer" containerID="6c7d0b2efa43d775b3a7d45b87655dc91f06e039f01bf98557d53649d68d4263"
Feb 23 13:28:28.663905 master-0 kubenswrapper[26474]: I0223 13:28:28.663699 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fdc59b98f-xpl4l"
Feb 23 13:28:28.673957 master-0 kubenswrapper[26474]: I0223 13:28:28.673558 26474 generic.go:334] "Generic (PLEG): container finished" podID="5a9ec263-54d6-4981-9127-e9bd62d1cf7d" containerID="7b2da3601c5b7e5e4d8df48c98d082a05db7cec1946e464dfd133aee3377e71e" exitCode=0
Feb 23 13:28:28.673957 master-0 kubenswrapper[26474]: I0223 13:28:28.673677 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:28.673957 master-0 kubenswrapper[26474]: I0223 13:28:28.673807 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dc5b-account-create-update-t78mz" event={"ID":"5a9ec263-54d6-4981-9127-e9bd62d1cf7d","Type":"ContainerDied","Data":"7b2da3601c5b7e5e4d8df48c98d082a05db7cec1946e464dfd133aee3377e71e"}
Feb 23 13:28:28.689003 master-0 kubenswrapper[26474]: I0223 13:28:28.688841 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:28.782367 master-0 kubenswrapper[26474]: I0223 13:28:28.780278 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-config-data\") pod \"4ca22878-e3cc-4177-9633-c79105d41fab\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") "
Feb 23 13:28:28.782367 master-0 kubenswrapper[26474]: I0223 13:28:28.780370 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgxzc\" (UniqueName: \"kubernetes.io/projected/4ca22878-e3cc-4177-9633-c79105d41fab-kube-api-access-zgxzc\") pod \"4ca22878-e3cc-4177-9633-c79105d41fab\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") "
Feb 23 13:28:28.782367 master-0 kubenswrapper[26474]: I0223 13:28:28.780508 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-combined-ca-bundle\") pod \"4ca22878-e3cc-4177-9633-c79105d41fab\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") "
Feb 23 13:28:28.782367 master-0 kubenswrapper[26474]: I0223 13:28:28.780550 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-scripts\") pod \"4ca22878-e3cc-4177-9633-c79105d41fab\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") "
Feb 23 13:28:28.782367 master-0 kubenswrapper[26474]: I0223 13:28:28.780622 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-public-tls-certs\") pod \"4ca22878-e3cc-4177-9633-c79105d41fab\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") "
Feb 23 13:28:28.782367 master-0 kubenswrapper[26474]: I0223 13:28:28.780645 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-httpd-run\") pod \"4ca22878-e3cc-4177-9633-c79105d41fab\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") "
Feb 23 13:28:28.782367 master-0 kubenswrapper[26474]: I0223 13:28:28.780764 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-logs\") pod \"4ca22878-e3cc-4177-9633-c79105d41fab\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") "
Feb 23 13:28:28.800207 master-0 kubenswrapper[26474]: I0223 13:28:28.797707 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4ca22878-e3cc-4177-9633-c79105d41fab" (UID: "4ca22878-e3cc-4177-9633-c79105d41fab"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:28:28.800207 master-0 kubenswrapper[26474]: I0223 13:28:28.797936 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-logs" (OuterVolumeSpecName: "logs") pod "4ca22878-e3cc-4177-9633-c79105d41fab" (UID: "4ca22878-e3cc-4177-9633-c79105d41fab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:28:28.800821 master-0 kubenswrapper[26474]: I0223 13:28:28.800733 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4ca22878-e3cc-4177-9633-c79105d41fab" (UID: "4ca22878-e3cc-4177-9633-c79105d41fab"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:28.800821 master-0 kubenswrapper[26474]: I0223 13:28:28.800756 26474 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.800821 master-0 kubenswrapper[26474]: I0223 13:28:28.800794 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ca22878-e3cc-4177-9633-c79105d41fab-logs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.800821 master-0 kubenswrapper[26474]: I0223 13:28:28.800770 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-config-data" (OuterVolumeSpecName: "config-data") pod "4ca22878-e3cc-4177-9633-c79105d41fab" (UID: "4ca22878-e3cc-4177-9633-c79105d41fab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:28.814378 master-0 kubenswrapper[26474]: I0223 13:28:28.811754 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ca22878-e3cc-4177-9633-c79105d41fab" (UID: "4ca22878-e3cc-4177-9633-c79105d41fab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:28.814378 master-0 kubenswrapper[26474]: I0223 13:28:28.811934 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-scripts" (OuterVolumeSpecName: "scripts") pod "4ca22878-e3cc-4177-9633-c79105d41fab" (UID: "4ca22878-e3cc-4177-9633-c79105d41fab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:28.814378 master-0 kubenswrapper[26474]: I0223 13:28:28.812164 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca22878-e3cc-4177-9633-c79105d41fab-kube-api-access-zgxzc" (OuterVolumeSpecName: "kube-api-access-zgxzc") pod "4ca22878-e3cc-4177-9633-c79105d41fab" (UID: "4ca22878-e3cc-4177-9633-c79105d41fab"). InnerVolumeSpecName "kube-api-access-zgxzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:28.904361 master-0 kubenswrapper[26474]: I0223 13:28:28.903786 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"4ca22878-e3cc-4177-9633-c79105d41fab\" (UID: \"4ca22878-e3cc-4177-9633-c79105d41fab\") "
Feb 23 13:28:28.919430 master-0 kubenswrapper[26474]: I0223 13:28:28.913168 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.919430 master-0 kubenswrapper[26474]: I0223 13:28:28.913232 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgxzc\" (UniqueName: \"kubernetes.io/projected/4ca22878-e3cc-4177-9633-c79105d41fab-kube-api-access-zgxzc\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.919430 master-0 kubenswrapper[26474]: I0223 13:28:28.913246 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.919430 master-0 kubenswrapper[26474]: I0223 13:28:28.913255 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.919430 master-0 kubenswrapper[26474]: I0223 13:28:28.913263 26474 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ca22878-e3cc-4177-9633-c79105d41fab-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:28.984362 master-0 kubenswrapper[26474]: I0223 13:28:28.974889 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" podStartSLOduration=3.974870433 podStartE2EDuration="3.974870433s" podCreationTimestamp="2026-02-23 13:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:28.974769031 +0000 UTC m=+830.821276718" watchObservedRunningTime="2026-02-23 13:28:28.974870433 +0000 UTC m=+830.821378110"
Feb 23 13:28:29.227761 master-0 kubenswrapper[26474]: I0223 13:28:29.226581 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fdc59b98f-xpl4l"]
Feb 23 13:28:29.235268 master-0 kubenswrapper[26474]: I0223 13:28:29.234971 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fdc59b98f-xpl4l"]
Feb 23 13:28:29.399896 master-0 kubenswrapper[26474]: I0223 13:28:29.399816 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:29.464112 master-0 kubenswrapper[26474]: I0223 13:28:29.459892 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6pnb\" (UniqueName: \"kubernetes.io/projected/323323e6-77db-4dbe-b877-c3875b42211a-kube-api-access-k6pnb\") pod \"323323e6-77db-4dbe-b877-c3875b42211a\" (UID: \"323323e6-77db-4dbe-b877-c3875b42211a\") "
Feb 23 13:28:29.464112 master-0 kubenswrapper[26474]: I0223 13:28:29.460393 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323323e6-77db-4dbe-b877-c3875b42211a-operator-scripts\") pod \"323323e6-77db-4dbe-b877-c3875b42211a\" (UID: \"323323e6-77db-4dbe-b877-c3875b42211a\") "
Feb 23 13:28:29.464112 master-0 kubenswrapper[26474]: I0223 13:28:29.462202 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323323e6-77db-4dbe-b877-c3875b42211a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "323323e6-77db-4dbe-b877-c3875b42211a" (UID: "323323e6-77db-4dbe-b877-c3875b42211a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:29.464112 master-0 kubenswrapper[26474]: I0223 13:28:29.462599 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323323e6-77db-4dbe-b877-c3875b42211a-kube-api-access-k6pnb" (OuterVolumeSpecName: "kube-api-access-k6pnb") pod "323323e6-77db-4dbe-b877-c3875b42211a" (UID: "323323e6-77db-4dbe-b877-c3875b42211a"). InnerVolumeSpecName "kube-api-access-k6pnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:29.565042 master-0 kubenswrapper[26474]: I0223 13:28:29.564994 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6pnb\" (UniqueName: \"kubernetes.io/projected/323323e6-77db-4dbe-b877-c3875b42211a-kube-api-access-k6pnb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:29.565042 master-0 kubenswrapper[26474]: I0223 13:28:29.565037 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323323e6-77db-4dbe-b877-c3875b42211a-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:29.689506 master-0 kubenswrapper[26474]: I0223 13:28:29.689443 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:29.691405 master-0 kubenswrapper[26474]: I0223 13:28:29.690558 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-smgfj"
Feb 23 13:28:29.702580 master-0 kubenswrapper[26474]: I0223 13:28:29.702513 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-smgfj" event={"ID":"323323e6-77db-4dbe-b877-c3875b42211a","Type":"ContainerDied","Data":"81e9fd4aa5d1f965536c9857fc0a6aa9ff87873823ccbe6969e347b33f920085"}
Feb 23 13:28:29.702648 master-0 kubenswrapper[26474]: I0223 13:28:29.702585 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e9fd4aa5d1f965536c9857fc0a6aa9ff87873823ccbe6969e347b33f920085"
Feb 23 13:28:30.049103 master-0 kubenswrapper[26474]: I0223 13:28:30.049024 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929" (OuterVolumeSpecName: "glance") pod "4ca22878-e3cc-4177-9633-c79105d41fab" (UID: "4ca22878-e3cc-4177-9633-c79105d41fab"). InnerVolumeSpecName "pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 23 13:28:30.055214 master-0 kubenswrapper[26474]: I0223 13:28:30.055091 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:28:30.077060 master-0 kubenswrapper[26474]: I0223 13:28:30.075571 26474 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") on node \"master-0\" "
Feb 23 13:28:30.118917 master-0 kubenswrapper[26474]: I0223 13:28:30.118802 26474 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 23 13:28:30.119656 master-0 kubenswrapper[26474]: I0223 13:28:30.119090 26474 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98" (UniqueName: "kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929") on node "master-0"
Feb 23 13:28:30.178711 master-0 kubenswrapper[26474]: I0223 13:28:30.178324 26474 reconciler_common.go:293] "Volume detached for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:30.305872 master-0 kubenswrapper[26474]: I0223 13:28:30.305715 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:28:30.376290 master-0 kubenswrapper[26474]: I0223 13:28:30.374988 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"]
Feb 23 13:28:30.387759 master-0 kubenswrapper[26474]: I0223 13:28:30.387432 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"]
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: I0223 13:28:30.449165 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ca22878-e3cc-4177-9633-c79105d41fab" path="/var/lib/kubelet/pods/4ca22878-e3cc-4177-9633-c79105d41fab/volumes"
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: I0223 13:28:30.449652 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a210ce-25c8-408e-91b7-6bb481931b8e" path="/var/lib/kubelet/pods/c0a210ce-25c8-408e-91b7-6bb481931b8e/volumes"
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: I0223 13:28:30.450232 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4fec4-default-external-api-0"]
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: E0223 13:28:30.450697 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a210ce-25c8-408e-91b7-6bb481931b8e" containerName="init"
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: I0223 13:28:30.450712 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a210ce-25c8-408e-91b7-6bb481931b8e" containerName="init"
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: E0223 13:28:30.450723 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323323e6-77db-4dbe-b877-c3875b42211a" containerName="mariadb-database-create"
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: I0223 13:28:30.450731 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="323323e6-77db-4dbe-b877-c3875b42211a" containerName="mariadb-database-create"
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: I0223 13:28:30.450977 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="323323e6-77db-4dbe-b877-c3875b42211a" containerName="mariadb-database-create"
Feb 23 13:28:30.452280 master-0 kubenswrapper[26474]: I0223 13:28:30.450991 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a210ce-25c8-408e-91b7-6bb481931b8e" containerName="init"
Feb 23 13:28:30.456294 master-0 kubenswrapper[26474]: I0223 13:28:30.455767 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"]
Feb 23 13:28:30.456294 master-0 kubenswrapper[26474]: I0223 13:28:30.456048 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.459008 master-0 kubenswrapper[26474]: I0223 13:28:30.458193 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4fec4-default-external-config-data"
Feb 23 13:28:30.459008 master-0 kubenswrapper[26474]: I0223 13:28:30.458427 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 23 13:28:30.607447 master-0 kubenswrapper[26474]: I0223 13:28:30.607285 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.607755 master-0 kubenswrapper[26474]: I0223 13:28:30.607650 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.608245 master-0 kubenswrapper[26474]: I0223 13:28:30.608218 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.608400 master-0 kubenswrapper[26474]: I0223 13:28:30.608376 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.609676 master-0 kubenswrapper[26474]: I0223 13:28:30.609632 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.610053 master-0 kubenswrapper[26474]: I0223 13:28:30.609689 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-httpd-run\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.610053 master-0 kubenswrapper[26474]: I0223 13:28:30.609871 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4stbh\" (UniqueName: \"kubernetes.io/projected/64431386-9bdd-4d6d-b469-cf1733e6ae01-kube-api-access-4stbh\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.610053 master-0 kubenswrapper[26474]: I0223 13:28:30.609902 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.715020 master-0 kubenswrapper[26474]: I0223 13:28:30.713728 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.715407 master-0 kubenswrapper[26474]: I0223 13:28:30.713977 26474 generic.go:334] "Generic (PLEG): container finished" podID="e3a9da5f-fbc1-4acf-8faf-222482a33f75" containerID="ccabe7e44c803917cdf2273fbfb9b5d2534d44f374144b586f0bb64313d108e9" exitCode=0
Feb 23 13:28:30.715407 master-0 kubenswrapper[26474]: I0223 13:28:30.714009 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q77rc" event={"ID":"e3a9da5f-fbc1-4acf-8faf-222482a33f75","Type":"ContainerDied","Data":"ccabe7e44c803917cdf2273fbfb9b5d2534d44f374144b586f0bb64313d108e9"}
Feb 23 13:28:30.716031 master-0 kubenswrapper[26474]: I0223 13:28:30.714895 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.716440 master-0 kubenswrapper[26474]: I0223 13:28:30.716403 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.716595 master-0 kubenswrapper[26474]: I0223 13:28:30.716558 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.716595 master-0 kubenswrapper[26474]: I0223 13:28:30.716603 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.716815 master-0 kubenswrapper[26474]: I0223 13:28:30.716705 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.716815 master-0 kubenswrapper[26474]: I0223 13:28:30.716742 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-httpd-run\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.717048 master-0 kubenswrapper[26474]: I0223 13:28:30.716996 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4stbh\" (UniqueName: \"kubernetes.io/projected/64431386-9bdd-4d6d-b469-cf1733e6ae01-kube-api-access-4stbh\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.717124 master-0 kubenswrapper[26474]: I0223 13:28:30.717072 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.724981 master-0 kubenswrapper[26474]: I0223 13:28:30.722016 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-httpd-run\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.724981 master-0 kubenswrapper[26474]: I0223 13:28:30.722357 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.724981 master-0 kubenswrapper[26474]: I0223 13:28:30.724703 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.726045 master-0 kubenswrapper[26474]: I0223 13:28:30.725575 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.729113 master-0 kubenswrapper[26474]: I0223 13:28:30.727151 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.732505 master-0 kubenswrapper[26474]: I0223 13:28:30.732456 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 13:28:30.732719 master-0 kubenswrapper[26474]: I0223 13:28:30.732680 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1725139a53e08998fd353fef3de63f88daa57a9b0265ed3b7798973e2d5ac396/globalmount\"" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:30.745457 master-0 kubenswrapper[26474]: I0223 13:28:30.745300 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4stbh\" (UniqueName: \"kubernetes.io/projected/64431386-9bdd-4d6d-b469-cf1733e6ae01-kube-api-access-4stbh\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:32.062474 master-0 kubenswrapper[26474]: I0223 13:28:32.062412 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:32.071833 master-0 kubenswrapper[26474]: I0223 13:28:32.071749 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:28:32.605526 master-0 kubenswrapper[26474]: I0223 13:28:32.605480 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q77rc"
Feb 23 13:28:32.609466 master-0 kubenswrapper[26474]: I0223 13:28:32.609437 26474 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ironic-dc5b-account-create-update-t78mz" Feb 23 13:28:32.750595 master-0 kubenswrapper[26474]: I0223 13:28:32.749814 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dc5b-account-create-update-t78mz" event={"ID":"5a9ec263-54d6-4981-9127-e9bd62d1cf7d","Type":"ContainerDied","Data":"e715e0b0181893dff4899a5e1aef2fdf1c742c66c89f4f953fdc8632bf7ceeb0"} Feb 23 13:28:32.750595 master-0 kubenswrapper[26474]: I0223 13:28:32.749874 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e715e0b0181893dff4899a5e1aef2fdf1c742c66c89f4f953fdc8632bf7ceeb0" Feb 23 13:28:32.750595 master-0 kubenswrapper[26474]: I0223 13:28:32.749961 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-dc5b-account-create-update-t78mz" Feb 23 13:28:32.756122 master-0 kubenswrapper[26474]: I0223 13:28:32.756084 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jwdv9" event={"ID":"51bc3c75-4dd4-4b8d-8fab-9035be69b72d","Type":"ContainerStarted","Data":"c7d787bdca8bbc92486a487ac584a6f161fdcf326483413f480702ba48103ae4"} Feb 23 13:28:32.767759 master-0 kubenswrapper[26474]: I0223 13:28:32.765448 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q77rc" event={"ID":"e3a9da5f-fbc1-4acf-8faf-222482a33f75","Type":"ContainerDied","Data":"488142c67083ba34aa4e79e5c7b754c28e19007162aad299739d3f60cdc9a1d9"} Feb 23 13:28:32.767759 master-0 kubenswrapper[26474]: I0223 13:28:32.765520 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488142c67083ba34aa4e79e5c7b754c28e19007162aad299739d3f60cdc9a1d9" Feb 23 13:28:32.767759 master-0 kubenswrapper[26474]: I0223 13:28:32.765622 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q77rc" Feb 23 13:28:32.782728 master-0 kubenswrapper[26474]: I0223 13:28:32.782659 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt9nv\" (UniqueName: \"kubernetes.io/projected/e3a9da5f-fbc1-4acf-8faf-222482a33f75-kube-api-access-dt9nv\") pod \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " Feb 23 13:28:32.793570 master-0 kubenswrapper[26474]: I0223 13:28:32.793496 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a9da5f-fbc1-4acf-8faf-222482a33f75-kube-api-access-dt9nv" (OuterVolumeSpecName: "kube-api-access-dt9nv") pod "e3a9da5f-fbc1-4acf-8faf-222482a33f75" (UID: "e3a9da5f-fbc1-4acf-8faf-222482a33f75"). InnerVolumeSpecName "kube-api-access-dt9nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:32.797633 master-0 kubenswrapper[26474]: I0223 13:28:32.797536 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-operator-scripts\") pod \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\" (UID: \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\") " Feb 23 13:28:32.797978 master-0 kubenswrapper[26474]: I0223 13:28:32.797956 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxq7g\" (UniqueName: \"kubernetes.io/projected/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-kube-api-access-bxq7g\") pod \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\" (UID: \"5a9ec263-54d6-4981-9127-e9bd62d1cf7d\") " Feb 23 13:28:32.798156 master-0 kubenswrapper[26474]: I0223 13:28:32.798104 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-credential-keys\") pod \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\" 
(UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " Feb 23 13:28:32.798323 master-0 kubenswrapper[26474]: I0223 13:28:32.798302 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-fernet-keys\") pod \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " Feb 23 13:28:32.798459 master-0 kubenswrapper[26474]: I0223 13:28:32.798443 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-scripts\") pod \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " Feb 23 13:28:32.798608 master-0 kubenswrapper[26474]: I0223 13:28:32.798591 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-config-data\") pod \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " Feb 23 13:28:32.798851 master-0 kubenswrapper[26474]: I0223 13:28:32.798831 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-combined-ca-bundle\") pod \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\" (UID: \"e3a9da5f-fbc1-4acf-8faf-222482a33f75\") " Feb 23 13:28:32.800043 master-0 kubenswrapper[26474]: I0223 13:28:32.800026 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt9nv\" (UniqueName: \"kubernetes.io/projected/e3a9da5f-fbc1-4acf-8faf-222482a33f75-kube-api-access-dt9nv\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:32.804756 master-0 kubenswrapper[26474]: I0223 13:28:32.803691 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a9ec263-54d6-4981-9127-e9bd62d1cf7d" (UID: "5a9ec263-54d6-4981-9127-e9bd62d1cf7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:32.819464 master-0 kubenswrapper[26474]: I0223 13:28:32.814185 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jwdv9" podStartSLOduration=2.546298163 podStartE2EDuration="7.814157578s" podCreationTimestamp="2026-02-23 13:28:25 +0000 UTC" firstStartedPulling="2026-02-23 13:28:27.131181293 +0000 UTC m=+828.977688970" lastFinishedPulling="2026-02-23 13:28:32.399040708 +0000 UTC m=+834.245548385" observedRunningTime="2026-02-23 13:28:32.783155643 +0000 UTC m=+834.629663320" watchObservedRunningTime="2026-02-23 13:28:32.814157578 +0000 UTC m=+834.660665255" Feb 23 13:28:32.840739 master-0 kubenswrapper[26474]: I0223 13:28:32.840660 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-scripts" (OuterVolumeSpecName: "scripts") pod "e3a9da5f-fbc1-4acf-8faf-222482a33f75" (UID: "e3a9da5f-fbc1-4acf-8faf-222482a33f75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:32.842056 master-0 kubenswrapper[26474]: I0223 13:28:32.841940 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-kube-api-access-bxq7g" (OuterVolumeSpecName: "kube-api-access-bxq7g") pod "5a9ec263-54d6-4981-9127-e9bd62d1cf7d" (UID: "5a9ec263-54d6-4981-9127-e9bd62d1cf7d"). InnerVolumeSpecName "kube-api-access-bxq7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:32.842133 master-0 kubenswrapper[26474]: I0223 13:28:32.842097 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e3a9da5f-fbc1-4acf-8faf-222482a33f75" (UID: "e3a9da5f-fbc1-4acf-8faf-222482a33f75"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:32.843518 master-0 kubenswrapper[26474]: I0223 13:28:32.843469 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3a9da5f-fbc1-4acf-8faf-222482a33f75" (UID: "e3a9da5f-fbc1-4acf-8faf-222482a33f75"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:32.847562 master-0 kubenswrapper[26474]: I0223 13:28:32.846439 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-config-data" (OuterVolumeSpecName: "config-data") pod "e3a9da5f-fbc1-4acf-8faf-222482a33f75" (UID: "e3a9da5f-fbc1-4acf-8faf-222482a33f75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:32.847992 master-0 kubenswrapper[26474]: I0223 13:28:32.847931 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3a9da5f-fbc1-4acf-8faf-222482a33f75" (UID: "e3a9da5f-fbc1-4acf-8faf-222482a33f75"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:32.899967 master-0 kubenswrapper[26474]: I0223 13:28:32.899773 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-q77rc"] Feb 23 13:28:32.903075 master-0 kubenswrapper[26474]: I0223 13:28:32.903012 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:32.903075 master-0 kubenswrapper[26474]: I0223 13:28:32.903068 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:32.903193 master-0 kubenswrapper[26474]: I0223 13:28:32.903084 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxq7g\" (UniqueName: \"kubernetes.io/projected/5a9ec263-54d6-4981-9127-e9bd62d1cf7d-kube-api-access-bxq7g\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:32.903193 master-0 kubenswrapper[26474]: I0223 13:28:32.903100 26474 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:32.903193 master-0 kubenswrapper[26474]: I0223 13:28:32.903114 26474 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:32.903193 master-0 kubenswrapper[26474]: I0223 13:28:32.903126 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:32.903193 master-0 kubenswrapper[26474]: 
I0223 13:28:32.903135 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a9da5f-fbc1-4acf-8faf-222482a33f75-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:32.920327 master-0 kubenswrapper[26474]: I0223 13:28:32.920245 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-q77rc"] Feb 23 13:28:32.960291 master-0 kubenswrapper[26474]: I0223 13:28:32.959742 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s6q8s"] Feb 23 13:28:32.981389 master-0 kubenswrapper[26474]: E0223 13:28:32.981294 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a9da5f-fbc1-4acf-8faf-222482a33f75" containerName="keystone-bootstrap" Feb 23 13:28:32.981642 master-0 kubenswrapper[26474]: I0223 13:28:32.981419 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a9da5f-fbc1-4acf-8faf-222482a33f75" containerName="keystone-bootstrap" Feb 23 13:28:32.981642 master-0 kubenswrapper[26474]: E0223 13:28:32.981538 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9ec263-54d6-4981-9127-e9bd62d1cf7d" containerName="mariadb-account-create-update" Feb 23 13:28:32.981642 master-0 kubenswrapper[26474]: I0223 13:28:32.981548 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9ec263-54d6-4981-9127-e9bd62d1cf7d" containerName="mariadb-account-create-update" Feb 23 13:28:32.982434 master-0 kubenswrapper[26474]: I0223 13:28:32.982045 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a9ec263-54d6-4981-9127-e9bd62d1cf7d" containerName="mariadb-account-create-update" Feb 23 13:28:32.982434 master-0 kubenswrapper[26474]: I0223 13:28:32.982098 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a9da5f-fbc1-4acf-8faf-222482a33f75" containerName="keystone-bootstrap" Feb 23 13:28:32.983357 master-0 kubenswrapper[26474]: I0223 13:28:32.983302 26474 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s6q8s"] Feb 23 13:28:32.983564 master-0 kubenswrapper[26474]: I0223 13:28:32.983544 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.013898 master-0 kubenswrapper[26474]: I0223 13:28:33.013782 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"] Feb 23 13:28:33.117135 master-0 kubenswrapper[26474]: I0223 13:28:33.117046 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-credential-keys\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.117631 master-0 kubenswrapper[26474]: I0223 13:28:33.117168 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwgs\" (UniqueName: \"kubernetes.io/projected/fe117f39-7efc-4bfd-bed4-125b46267fd6-kube-api-access-gwwgs\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.118988 master-0 kubenswrapper[26474]: I0223 13:28:33.118900 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-scripts\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.119161 master-0 kubenswrapper[26474]: I0223 13:28:33.119146 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-config-data\") pod 
\"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.119380 master-0 kubenswrapper[26474]: I0223 13:28:33.119312 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-fernet-keys\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.119557 master-0 kubenswrapper[26474]: I0223 13:28:33.119531 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-combined-ca-bundle\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.137261 master-0 kubenswrapper[26474]: W0223 13:28:33.137186 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5fa92de_9c73_449e_9f0d_abbd176f1eb5.slice/crio-db52a498a865bf3d18b7d46b64dc177f138cd85ae1cbf8e8ba45260d091a1ad7 WatchSource:0}: Error finding container db52a498a865bf3d18b7d46b64dc177f138cd85ae1cbf8e8ba45260d091a1ad7: Status 404 returned error can't find the container with id db52a498a865bf3d18b7d46b64dc177f138cd85ae1cbf8e8ba45260d091a1ad7 Feb 23 13:28:33.143139 master-0 kubenswrapper[26474]: I0223 13:28:33.143094 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"] Feb 23 13:28:33.222918 master-0 kubenswrapper[26474]: I0223 13:28:33.222219 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-scripts\") pod \"keystone-bootstrap-s6q8s\" (UID: 
\"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.222918 master-0 kubenswrapper[26474]: I0223 13:28:33.222323 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-config-data\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.222918 master-0 kubenswrapper[26474]: I0223 13:28:33.222402 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-fernet-keys\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.222918 master-0 kubenswrapper[26474]: I0223 13:28:33.222434 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-combined-ca-bundle\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.222918 master-0 kubenswrapper[26474]: I0223 13:28:33.222502 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-credential-keys\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.222918 master-0 kubenswrapper[26474]: I0223 13:28:33.222541 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwgs\" (UniqueName: \"kubernetes.io/projected/fe117f39-7efc-4bfd-bed4-125b46267fd6-kube-api-access-gwwgs\") pod \"keystone-bootstrap-s6q8s\" (UID: 
\"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.237281 master-0 kubenswrapper[26474]: I0223 13:28:33.230321 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-credential-keys\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.237281 master-0 kubenswrapper[26474]: I0223 13:28:33.230735 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-config-data\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.237281 master-0 kubenswrapper[26474]: I0223 13:28:33.230926 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-combined-ca-bundle\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.237281 master-0 kubenswrapper[26474]: I0223 13:28:33.231169 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-fernet-keys\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.237281 master-0 kubenswrapper[26474]: I0223 13:28:33.231553 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-scripts\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 
13:28:33.241971 master-0 kubenswrapper[26474]: I0223 13:28:33.241637 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwgs\" (UniqueName: \"kubernetes.io/projected/fe117f39-7efc-4bfd-bed4-125b46267fd6-kube-api-access-gwwgs\") pod \"keystone-bootstrap-s6q8s\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.328198 master-0 kubenswrapper[26474]: I0223 13:28:33.328129 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:33.788731 master-0 kubenswrapper[26474]: I0223 13:28:33.782287 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d5fa92de-9c73-449e-9f0d-abbd176f1eb5","Type":"ContainerStarted","Data":"587b8669d3eca70a07d5c825bbfd0552cbca861825e7d9f54124c8f4251e8d19"} Feb 23 13:28:33.788731 master-0 kubenswrapper[26474]: I0223 13:28:33.782372 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d5fa92de-9c73-449e-9f0d-abbd176f1eb5","Type":"ContainerStarted","Data":"db52a498a865bf3d18b7d46b64dc177f138cd85ae1cbf8e8ba45260d091a1ad7"} Feb 23 13:28:33.799675 master-0 kubenswrapper[26474]: I0223 13:28:33.799571 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"64431386-9bdd-4d6d-b469-cf1733e6ae01","Type":"ContainerStarted","Data":"5d462ace45b7339e633e21bbec2030637d88bd78f71b07fe7e58244a384422c7"} Feb 23 13:28:33.799675 master-0 kubenswrapper[26474]: I0223 13:28:33.799634 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"64431386-9bdd-4d6d-b469-cf1733e6ae01","Type":"ContainerStarted","Data":"601cf432990aaa7ea645f03e3e792058e12099dcb58e353d26074960326ab256"} Feb 23 13:28:33.860040 master-0 kubenswrapper[26474]: I0223 
13:28:33.859976 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s6q8s"] Feb 23 13:28:33.865737 master-0 kubenswrapper[26474]: W0223 13:28:33.865498 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe117f39_7efc_4bfd_bed4_125b46267fd6.slice/crio-7adf77470d5f125949aa4edd45c18922d0b3912ff06725692ca76427be643cc1 WatchSource:0}: Error finding container 7adf77470d5f125949aa4edd45c18922d0b3912ff06725692ca76427be643cc1: Status 404 returned error can't find the container with id 7adf77470d5f125949aa4edd45c18922d0b3912ff06725692ca76427be643cc1 Feb 23 13:28:34.434691 master-0 kubenswrapper[26474]: I0223 13:28:34.434599 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a9da5f-fbc1-4acf-8faf-222482a33f75" path="/var/lib/kubelet/pods/e3a9da5f-fbc1-4acf-8faf-222482a33f75/volumes" Feb 23 13:28:34.839199 master-0 kubenswrapper[26474]: I0223 13:28:34.837891 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6q8s" event={"ID":"fe117f39-7efc-4bfd-bed4-125b46267fd6","Type":"ContainerStarted","Data":"fddb502aacf4744d457b1ef69525193d4379598c246d7ba4e6a6c4c44f335ca7"} Feb 23 13:28:34.839199 master-0 kubenswrapper[26474]: I0223 13:28:34.837946 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6q8s" event={"ID":"fe117f39-7efc-4bfd-bed4-125b46267fd6","Type":"ContainerStarted","Data":"7adf77470d5f125949aa4edd45c18922d0b3912ff06725692ca76427be643cc1"} Feb 23 13:28:34.844130 master-0 kubenswrapper[26474]: I0223 13:28:34.843235 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d5fa92de-9c73-449e-9f0d-abbd176f1eb5","Type":"ContainerStarted","Data":"45fb6aa6006145e1f9daf21c319aa115ba9b32e777e96c9dbfcb7c0fc0c9514b"} Feb 23 13:28:34.851171 master-0 kubenswrapper[26474]: I0223 13:28:34.850587 
26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"64431386-9bdd-4d6d-b469-cf1733e6ae01","Type":"ContainerStarted","Data":"0f29bec70791b04a1f83e8f9002bf09630c2b9c91ee0fb472b6424c7b608ae1b"} Feb 23 13:28:34.881431 master-0 kubenswrapper[26474]: I0223 13:28:34.881238 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s6q8s" podStartSLOduration=2.881210425 podStartE2EDuration="2.881210425s" podCreationTimestamp="2026-02-23 13:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:34.863703518 +0000 UTC m=+836.710211215" watchObservedRunningTime="2026-02-23 13:28:34.881210425 +0000 UTC m=+836.727718102" Feb 23 13:28:34.904306 master-0 kubenswrapper[26474]: I0223 13:28:34.903599 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4fec4-default-external-api-0" podStartSLOduration=4.90357587 podStartE2EDuration="4.90357587s" podCreationTimestamp="2026-02-23 13:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:34.892615424 +0000 UTC m=+836.739123111" watchObservedRunningTime="2026-02-23 13:28:34.90357587 +0000 UTC m=+836.750083547" Feb 23 13:28:34.957089 master-0 kubenswrapper[26474]: I0223 13:28:34.956982 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4fec4-default-internal-api-0" podStartSLOduration=6.956965422 podStartE2EDuration="6.956965422s" podCreationTimestamp="2026-02-23 13:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:34.952353849 +0000 UTC m=+836.798861526" watchObservedRunningTime="2026-02-23 13:28:34.956965422 +0000 UTC 
m=+836.803473089" Feb 23 13:28:35.775267 master-0 kubenswrapper[26474]: I0223 13:28:35.775182 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-gzjvk"] Feb 23 13:28:35.777369 master-0 kubenswrapper[26474]: I0223 13:28:35.777304 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.781424 master-0 kubenswrapper[26474]: I0223 13:28:35.781071 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 23 13:28:35.781424 master-0 kubenswrapper[26474]: I0223 13:28:35.781196 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Feb 23 13:28:35.801181 master-0 kubenswrapper[26474]: I0223 13:28:35.797447 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-gzjvk"] Feb 23 13:28:35.841466 master-0 kubenswrapper[26474]: I0223 13:28:35.841409 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data-merged\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.841697 master-0 kubenswrapper[26474]: I0223 13:28:35.841584 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-combined-ca-bundle\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.841697 master-0 kubenswrapper[26474]: I0223 13:28:35.841657 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.841821 master-0 kubenswrapper[26474]: I0223 13:28:35.841731 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-scripts\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.841821 master-0 kubenswrapper[26474]: I0223 13:28:35.841787 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvd4\" (UniqueName: \"kubernetes.io/projected/53c0fb4f-cbcb-4439-97c6-0b529f807785-kube-api-access-8xvd4\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.842062 master-0 kubenswrapper[26474]: I0223 13:28:35.842031 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/53c0fb4f-cbcb-4439-97c6-0b529f807785-etc-podinfo\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.944601 master-0 kubenswrapper[26474]: I0223 13:28:35.944519 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/53c0fb4f-cbcb-4439-97c6-0b529f807785-etc-podinfo\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.945042 master-0 kubenswrapper[26474]: I0223 13:28:35.945007 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data-merged\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.945740 master-0 kubenswrapper[26474]: I0223 13:28:35.945686 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-combined-ca-bundle\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.945867 master-0 kubenswrapper[26474]: I0223 13:28:35.945827 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.946012 master-0 kubenswrapper[26474]: I0223 13:28:35.945943 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-scripts\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.946012 master-0 kubenswrapper[26474]: I0223 13:28:35.945994 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvd4\" (UniqueName: \"kubernetes.io/projected/53c0fb4f-cbcb-4439-97c6-0b529f807785-kube-api-access-8xvd4\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.947080 master-0 kubenswrapper[26474]: I0223 13:28:35.947042 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data-merged\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.950463 master-0 kubenswrapper[26474]: I0223 13:28:35.949186 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/53c0fb4f-cbcb-4439-97c6-0b529f807785-etc-podinfo\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.952490 master-0 kubenswrapper[26474]: I0223 13:28:35.952460 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-combined-ca-bundle\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.953403 master-0 kubenswrapper[26474]: I0223 13:28:35.953294 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-scripts\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.966969 master-0 kubenswrapper[26474]: I0223 13:28:35.954587 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data\") pod \"ironic-db-sync-gzjvk\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:35.972603 master-0 kubenswrapper[26474]: I0223 13:28:35.972542 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvd4\" (UniqueName: \"kubernetes.io/projected/53c0fb4f-cbcb-4439-97c6-0b529f807785-kube-api-access-8xvd4\") pod \"ironic-db-sync-gzjvk\" 
(UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:36.007032 master-0 kubenswrapper[26474]: I0223 13:28:36.006498 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:28:36.117384 master-0 kubenswrapper[26474]: I0223 13:28:36.117266 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4b6db685-b664p"] Feb 23 13:28:36.133209 master-0 kubenswrapper[26474]: I0223 13:28:36.118835 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerName="dnsmasq-dns" containerID="cri-o://f52ac843cd37fd6a277015924e303f909ddb46463aefbdea39afc30c2e6f99bd" gracePeriod=10 Feb 23 13:28:36.133209 master-0 kubenswrapper[26474]: I0223 13:28:36.132839 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:28:36.885102 master-0 kubenswrapper[26474]: I0223 13:28:36.885003 26474 generic.go:334] "Generic (PLEG): container finished" podID="51bc3c75-4dd4-4b8d-8fab-9035be69b72d" containerID="c7d787bdca8bbc92486a487ac584a6f161fdcf326483413f480702ba48103ae4" exitCode=0 Feb 23 13:28:36.885102 master-0 kubenswrapper[26474]: I0223 13:28:36.885099 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jwdv9" event={"ID":"51bc3c75-4dd4-4b8d-8fab-9035be69b72d","Type":"ContainerDied","Data":"c7d787bdca8bbc92486a487ac584a6f161fdcf326483413f480702ba48103ae4"} Feb 23 13:28:36.891174 master-0 kubenswrapper[26474]: I0223 13:28:36.891059 26474 generic.go:334] "Generic (PLEG): container finished" podID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerID="f52ac843cd37fd6a277015924e303f909ddb46463aefbdea39afc30c2e6f99bd" exitCode=0 Feb 23 13:28:36.891174 master-0 kubenswrapper[26474]: I0223 13:28:36.891125 26474 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" event={"ID":"2ec9002a-dd92-48b5-92b8-af64bf2871a5","Type":"ContainerDied","Data":"f52ac843cd37fd6a277015924e303f909ddb46463aefbdea39afc30c2e6f99bd"} Feb 23 13:28:37.907283 master-0 kubenswrapper[26474]: I0223 13:28:37.907209 26474 generic.go:334] "Generic (PLEG): container finished" podID="fe117f39-7efc-4bfd-bed4-125b46267fd6" containerID="fddb502aacf4744d457b1ef69525193d4379598c246d7ba4e6a6c4c44f335ca7" exitCode=0 Feb 23 13:28:37.907283 master-0 kubenswrapper[26474]: I0223 13:28:37.907268 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6q8s" event={"ID":"fe117f39-7efc-4bfd-bed4-125b46267fd6","Type":"ContainerDied","Data":"fddb502aacf4744d457b1ef69525193d4379598c246d7ba4e6a6c4c44f335ca7"} Feb 23 13:28:40.306650 master-0 kubenswrapper[26474]: I0223 13:28:40.306553 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:40.306650 master-0 kubenswrapper[26474]: I0223 13:28:40.306635 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:40.534580 master-0 kubenswrapper[26474]: I0223 13:28:40.534471 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:40.534971 master-0 kubenswrapper[26474]: I0223 13:28:40.534615 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:41.387569 master-0 kubenswrapper[26474]: I0223 13:28:41.387509 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.192:5353: connect: connection refused" Feb 23 13:28:41.452482 master-0 
kubenswrapper[26474]: I0223 13:28:41.452266 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:41.452482 master-0 kubenswrapper[26474]: I0223 13:28:41.452380 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:42.072089 master-0 kubenswrapper[26474]: I0223 13:28:42.071989 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:42.072089 master-0 kubenswrapper[26474]: I0223 13:28:42.072042 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:42.102550 master-0 kubenswrapper[26474]: I0223 13:28:42.102490 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:42.115387 master-0 kubenswrapper[26474]: I0223 13:28:42.115294 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:42.464493 master-0 kubenswrapper[26474]: I0223 13:28:42.464320 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:42.464493 master-0 kubenswrapper[26474]: I0223 13:28:42.464407 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:43.452191 master-0 kubenswrapper[26474]: I0223 13:28:43.452129 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:43.474251 master-0 kubenswrapper[26474]: I0223 13:28:43.474189 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:28:43.487994 master-0 kubenswrapper[26474]: I0223 
13:28:43.487935 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:28:44.362545 master-0 kubenswrapper[26474]: I0223 13:28:44.361885 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:44.455901 master-0 kubenswrapper[26474]: I0223 13:28:44.455837 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:28:46.104961 master-0 kubenswrapper[26474]: I0223 13:28:46.104185 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.192:5353: connect: connection refused" Feb 23 13:28:46.778762 master-0 kubenswrapper[26474]: I0223 13:28:46.778707 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:46.791584 master-0 kubenswrapper[26474]: I0223 13:28:46.791529 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:46.870917 master-0 kubenswrapper[26474]: I0223 13:28:46.870838 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwwgs\" (UniqueName: \"kubernetes.io/projected/fe117f39-7efc-4bfd-bed4-125b46267fd6-kube-api-access-gwwgs\") pod \"fe117f39-7efc-4bfd-bed4-125b46267fd6\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " Feb 23 13:28:46.871150 master-0 kubenswrapper[26474]: I0223 13:28:46.870941 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-fernet-keys\") pod \"fe117f39-7efc-4bfd-bed4-125b46267fd6\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " Feb 23 13:28:46.871150 master-0 kubenswrapper[26474]: I0223 13:28:46.870996 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-config-data\") pod \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " Feb 23 13:28:46.871150 master-0 kubenswrapper[26474]: I0223 13:28:46.871140 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcxqb\" (UniqueName: \"kubernetes.io/projected/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-kube-api-access-xcxqb\") pod \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " Feb 23 13:28:46.871299 master-0 kubenswrapper[26474]: I0223 13:28:46.871228 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-scripts\") pod \"fe117f39-7efc-4bfd-bed4-125b46267fd6\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " Feb 23 13:28:46.871299 master-0 kubenswrapper[26474]: I0223 13:28:46.871266 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-combined-ca-bundle\") pod \"fe117f39-7efc-4bfd-bed4-125b46267fd6\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " Feb 23 13:28:46.871422 master-0 kubenswrapper[26474]: I0223 13:28:46.871372 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-credential-keys\") pod \"fe117f39-7efc-4bfd-bed4-125b46267fd6\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " Feb 23 13:28:46.871422 master-0 kubenswrapper[26474]: I0223 13:28:46.871403 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-config-data\") pod \"fe117f39-7efc-4bfd-bed4-125b46267fd6\" (UID: \"fe117f39-7efc-4bfd-bed4-125b46267fd6\") " Feb 23 13:28:46.871513 master-0 kubenswrapper[26474]: I0223 13:28:46.871432 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-scripts\") pod \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " Feb 23 13:28:46.871513 master-0 kubenswrapper[26474]: I0223 13:28:46.871475 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-combined-ca-bundle\") pod \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " Feb 23 13:28:46.871646 master-0 kubenswrapper[26474]: I0223 13:28:46.871528 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-logs\") pod 
\"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\" (UID: \"51bc3c75-4dd4-4b8d-8fab-9035be69b72d\") " Feb 23 13:28:46.872044 master-0 kubenswrapper[26474]: I0223 13:28:46.872000 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-logs" (OuterVolumeSpecName: "logs") pod "51bc3c75-4dd4-4b8d-8fab-9035be69b72d" (UID: "51bc3c75-4dd4-4b8d-8fab-9035be69b72d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:46.876050 master-0 kubenswrapper[26474]: I0223 13:28:46.875982 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe117f39-7efc-4bfd-bed4-125b46267fd6-kube-api-access-gwwgs" (OuterVolumeSpecName: "kube-api-access-gwwgs") pod "fe117f39-7efc-4bfd-bed4-125b46267fd6" (UID: "fe117f39-7efc-4bfd-bed4-125b46267fd6"). InnerVolumeSpecName "kube-api-access-gwwgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:46.876479 master-0 kubenswrapper[26474]: I0223 13:28:46.876447 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-scripts" (OuterVolumeSpecName: "scripts") pod "51bc3c75-4dd4-4b8d-8fab-9035be69b72d" (UID: "51bc3c75-4dd4-4b8d-8fab-9035be69b72d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:46.877065 master-0 kubenswrapper[26474]: I0223 13:28:46.877015 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.877065 master-0 kubenswrapper[26474]: I0223 13:28:46.877041 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.877065 master-0 kubenswrapper[26474]: I0223 13:28:46.877057 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwwgs\" (UniqueName: \"kubernetes.io/projected/fe117f39-7efc-4bfd-bed4-125b46267fd6-kube-api-access-gwwgs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.879190 master-0 kubenswrapper[26474]: I0223 13:28:46.879137 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fe117f39-7efc-4bfd-bed4-125b46267fd6" (UID: "fe117f39-7efc-4bfd-bed4-125b46267fd6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:46.879659 master-0 kubenswrapper[26474]: I0223 13:28:46.879624 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-kube-api-access-xcxqb" (OuterVolumeSpecName: "kube-api-access-xcxqb") pod "51bc3c75-4dd4-4b8d-8fab-9035be69b72d" (UID: "51bc3c75-4dd4-4b8d-8fab-9035be69b72d"). InnerVolumeSpecName "kube-api-access-xcxqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:46.879726 master-0 kubenswrapper[26474]: I0223 13:28:46.879644 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-scripts" (OuterVolumeSpecName: "scripts") pod "fe117f39-7efc-4bfd-bed4-125b46267fd6" (UID: "fe117f39-7efc-4bfd-bed4-125b46267fd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:46.881193 master-0 kubenswrapper[26474]: I0223 13:28:46.881147 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fe117f39-7efc-4bfd-bed4-125b46267fd6" (UID: "fe117f39-7efc-4bfd-bed4-125b46267fd6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:46.899767 master-0 kubenswrapper[26474]: I0223 13:28:46.899696 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-config-data" (OuterVolumeSpecName: "config-data") pod "fe117f39-7efc-4bfd-bed4-125b46267fd6" (UID: "fe117f39-7efc-4bfd-bed4-125b46267fd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:46.907569 master-0 kubenswrapper[26474]: I0223 13:28:46.907490 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-config-data" (OuterVolumeSpecName: "config-data") pod "51bc3c75-4dd4-4b8d-8fab-9035be69b72d" (UID: "51bc3c75-4dd4-4b8d-8fab-9035be69b72d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:46.909993 master-0 kubenswrapper[26474]: I0223 13:28:46.909963 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51bc3c75-4dd4-4b8d-8fab-9035be69b72d" (UID: "51bc3c75-4dd4-4b8d-8fab-9035be69b72d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:46.919884 master-0 kubenswrapper[26474]: I0223 13:28:46.919810 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe117f39-7efc-4bfd-bed4-125b46267fd6" (UID: "fe117f39-7efc-4bfd-bed4-125b46267fd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:46.978902 master-0 kubenswrapper[26474]: I0223 13:28:46.978855 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcxqb\" (UniqueName: \"kubernetes.io/projected/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-kube-api-access-xcxqb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.978902 master-0 kubenswrapper[26474]: I0223 13:28:46.978901 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.979060 master-0 kubenswrapper[26474]: I0223 13:28:46.978914 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.979060 master-0 kubenswrapper[26474]: I0223 13:28:46.978927 26474 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.979060 master-0 kubenswrapper[26474]: I0223 13:28:46.978935 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.979060 master-0 kubenswrapper[26474]: I0223 13:28:46.978944 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.979060 master-0 kubenswrapper[26474]: I0223 13:28:46.978952 26474 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe117f39-7efc-4bfd-bed4-125b46267fd6-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:46.979060 master-0 kubenswrapper[26474]: I0223 13:28:46.978961 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bc3c75-4dd4-4b8d-8fab-9035be69b72d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:47.531046 master-0 kubenswrapper[26474]: I0223 13:28:47.530980 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6q8s" event={"ID":"fe117f39-7efc-4bfd-bed4-125b46267fd6","Type":"ContainerDied","Data":"7adf77470d5f125949aa4edd45c18922d0b3912ff06725692ca76427be643cc1"} Feb 23 13:28:47.531046 master-0 kubenswrapper[26474]: I0223 13:28:47.531013 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s6q8s" Feb 23 13:28:47.531582 master-0 kubenswrapper[26474]: I0223 13:28:47.531034 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adf77470d5f125949aa4edd45c18922d0b3912ff06725692ca76427be643cc1" Feb 23 13:28:47.534278 master-0 kubenswrapper[26474]: I0223 13:28:47.533217 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jwdv9" event={"ID":"51bc3c75-4dd4-4b8d-8fab-9035be69b72d","Type":"ContainerDied","Data":"41812ec43a22c7015f9a439d96378db6e9b99e58eddfcdf80af70325b3b799bc"} Feb 23 13:28:47.534278 master-0 kubenswrapper[26474]: I0223 13:28:47.533244 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41812ec43a22c7015f9a439d96378db6e9b99e58eddfcdf80af70325b3b799bc" Feb 23 13:28:47.534278 master-0 kubenswrapper[26474]: I0223 13:28:47.533273 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jwdv9" Feb 23 13:28:47.927364 master-0 kubenswrapper[26474]: I0223 13:28:47.926356 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66dfd5f7c4-jsbfx"] Feb 23 13:28:47.927364 master-0 kubenswrapper[26474]: E0223 13:28:47.926921 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bc3c75-4dd4-4b8d-8fab-9035be69b72d" containerName="placement-db-sync" Feb 23 13:28:47.927364 master-0 kubenswrapper[26474]: I0223 13:28:47.926936 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bc3c75-4dd4-4b8d-8fab-9035be69b72d" containerName="placement-db-sync" Feb 23 13:28:47.927364 master-0 kubenswrapper[26474]: E0223 13:28:47.926948 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe117f39-7efc-4bfd-bed4-125b46267fd6" containerName="keystone-bootstrap" Feb 23 13:28:47.927364 master-0 kubenswrapper[26474]: I0223 13:28:47.926955 26474 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fe117f39-7efc-4bfd-bed4-125b46267fd6" containerName="keystone-bootstrap" Feb 23 13:28:47.927364 master-0 kubenswrapper[26474]: I0223 13:28:47.927256 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe117f39-7efc-4bfd-bed4-125b46267fd6" containerName="keystone-bootstrap" Feb 23 13:28:47.927364 master-0 kubenswrapper[26474]: I0223 13:28:47.927306 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bc3c75-4dd4-4b8d-8fab-9035be69b72d" containerName="placement-db-sync" Feb 23 13:28:47.934505 master-0 kubenswrapper[26474]: I0223 13:28:47.928472 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66dfd5f7c4-jsbfx" Feb 23 13:28:47.934505 master-0 kubenswrapper[26474]: I0223 13:28:47.930877 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 23 13:28:47.934505 master-0 kubenswrapper[26474]: I0223 13:28:47.931089 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 13:28:47.934505 master-0 kubenswrapper[26474]: I0223 13:28:47.931220 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 23 13:28:47.934505 master-0 kubenswrapper[26474]: I0223 13:28:47.931259 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 13:28:47.954440 master-0 kubenswrapper[26474]: I0223 13:28:47.954247 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66dfd5f7c4-jsbfx"] Feb 23 13:28:48.005657 master-0 kubenswrapper[26474]: I0223 13:28:48.005597 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-config-data\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " 
pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.005961 master-0 kubenswrapper[26474]: I0223 13:28:48.005927 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqdll\" (UniqueName: \"kubernetes.io/projected/9725d464-c206-407c-9b4c-983607fe63d1-kube-api-access-gqdll\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.006002 master-0 kubenswrapper[26474]: I0223 13:28:48.005981 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9725d464-c206-407c-9b4c-983607fe63d1-logs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.006101 master-0 kubenswrapper[26474]: I0223 13:28:48.006080 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-public-tls-certs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.006279 master-0 kubenswrapper[26474]: I0223 13:28:48.006249 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-internal-tls-certs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.006326 master-0 kubenswrapper[26474]: I0223 13:28:48.006296 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-scripts\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.006411 master-0 kubenswrapper[26474]: I0223 13:28:48.006314 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-combined-ca-bundle\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.010885 master-0 kubenswrapper[26474]: I0223 13:28:48.010858 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f6c5d9dcc-k6qfz"]
Feb 23 13:28:48.012458 master-0 kubenswrapper[26474]: I0223 13:28:48.012434 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.015916 master-0 kubenswrapper[26474]: I0223 13:28:48.015812 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 23 13:28:48.016144 master-0 kubenswrapper[26474]: I0223 13:28:48.016120 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 23 13:28:48.016281 master-0 kubenswrapper[26474]: I0223 13:28:48.016258 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 23 13:28:48.016694 master-0 kubenswrapper[26474]: I0223 13:28:48.016569 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 23 13:28:48.016761 master-0 kubenswrapper[26474]: I0223 13:28:48.016589 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 23 13:28:48.026656 master-0 kubenswrapper[26474]: I0223 13:28:48.026497 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f6c5d9dcc-k6qfz"]
Feb 23 13:28:48.109360 master-0 kubenswrapper[26474]: I0223 13:28:48.108389 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-config-data\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.109360 master-0 kubenswrapper[26474]: I0223 13:28:48.108457 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-internal-tls-certs\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.109360 master-0 kubenswrapper[26474]: I0223 13:28:48.108506 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-scripts\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.109360 master-0 kubenswrapper[26474]: I0223 13:28:48.108566 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqdll\" (UniqueName: \"kubernetes.io/projected/9725d464-c206-407c-9b4c-983607fe63d1-kube-api-access-gqdll\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.109709 master-0 kubenswrapper[26474]: I0223 13:28:48.108838 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9725d464-c206-407c-9b4c-983607fe63d1-logs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.109709 master-0 kubenswrapper[26474]: I0223 13:28:48.109526 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9725d464-c206-407c-9b4c-983607fe63d1-logs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.109709 master-0 kubenswrapper[26474]: I0223 13:28:48.109701 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-fernet-keys\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.109795 master-0 kubenswrapper[26474]: I0223 13:28:48.109766 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-combined-ca-bundle\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.109859 master-0 kubenswrapper[26474]: I0223 13:28:48.109828 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-public-tls-certs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.109904 master-0 kubenswrapper[26474]: I0223 13:28:48.109875 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-config-data\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.109937 master-0 kubenswrapper[26474]: I0223 13:28:48.109905 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-public-tls-certs\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.109937 master-0 kubenswrapper[26474]: I0223 13:28:48.109934 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-credential-keys\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.109994 master-0 kubenswrapper[26474]: I0223 13:28:48.109966 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrhr\" (UniqueName: \"kubernetes.io/projected/16a4bd6f-20fe-4899-874b-5bde9553a934-kube-api-access-kkrhr\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.109994 master-0 kubenswrapper[26474]: I0223 13:28:48.109988 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-internal-tls-certs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.110060 master-0 kubenswrapper[26474]: I0223 13:28:48.110007 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-scripts\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.110060 master-0 kubenswrapper[26474]: I0223 13:28:48.110025 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-combined-ca-bundle\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.114613 master-0 kubenswrapper[26474]: I0223 13:28:48.112823 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-public-tls-certs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.114613 master-0 kubenswrapper[26474]: I0223 13:28:48.113691 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-config-data\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.118465 master-0 kubenswrapper[26474]: I0223 13:28:48.118390 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-combined-ca-bundle\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.129373 master-0 kubenswrapper[26474]: I0223 13:28:48.128990 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-internal-tls-certs\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.139807 master-0 kubenswrapper[26474]: I0223 13:28:48.139738 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-scripts\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.149327 master-0 kubenswrapper[26474]: I0223 13:28:48.149282 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqdll\" (UniqueName: \"kubernetes.io/projected/9725d464-c206-407c-9b4c-983607fe63d1-kube-api-access-gqdll\") pod \"placement-66dfd5f7c4-jsbfx\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.212533 master-0 kubenswrapper[26474]: I0223 13:28:48.212289 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-internal-tls-certs\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.212533 master-0 kubenswrapper[26474]: I0223 13:28:48.212398 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-scripts\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.212533 master-0 kubenswrapper[26474]: I0223 13:28:48.212503 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-fernet-keys\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.212760 master-0 kubenswrapper[26474]: I0223 13:28:48.212539 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-combined-ca-bundle\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.212760 master-0 kubenswrapper[26474]: I0223 13:28:48.212597 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-config-data\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.213993 master-0 kubenswrapper[26474]: I0223 13:28:48.213298 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-public-tls-certs\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.213993 master-0 kubenswrapper[26474]: I0223 13:28:48.213576 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-credential-keys\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.213993 master-0 kubenswrapper[26474]: I0223 13:28:48.213663 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkrhr\" (UniqueName: \"kubernetes.io/projected/16a4bd6f-20fe-4899-874b-5bde9553a934-kube-api-access-kkrhr\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.220299 master-0 kubenswrapper[26474]: I0223 13:28:48.215747 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-scripts\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.220299 master-0 kubenswrapper[26474]: I0223 13:28:48.216508 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-fernet-keys\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.220299 master-0 kubenswrapper[26474]: I0223 13:28:48.216734 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-internal-tls-certs\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.220299 master-0 kubenswrapper[26474]: I0223 13:28:48.216758 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-combined-ca-bundle\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.220299 master-0 kubenswrapper[26474]: I0223 13:28:48.216843 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-credential-keys\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.220299 master-0 kubenswrapper[26474]: I0223 13:28:48.219316 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-config-data\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.220299 master-0 kubenswrapper[26474]: I0223 13:28:48.219518 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a4bd6f-20fe-4899-874b-5bde9553a934-public-tls-certs\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.232646 master-0 kubenswrapper[26474]: I0223 13:28:48.231145 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkrhr\" (UniqueName: \"kubernetes.io/projected/16a4bd6f-20fe-4899-874b-5bde9553a934-kube-api-access-kkrhr\") pod \"keystone-f6c5d9dcc-k6qfz\" (UID: \"16a4bd6f-20fe-4899-874b-5bde9553a934\") " pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.266683 master-0 kubenswrapper[26474]: I0223 13:28:48.265970 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:28:48.285421 master-0 kubenswrapper[26474]: I0223 13:28:48.285215 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d4b6db685-b664p"
Feb 23 13:28:48.336828 master-0 kubenswrapper[26474]: I0223 13:28:48.336713 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-svc\") pod \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") "
Feb 23 13:28:48.336828 master-0 kubenswrapper[26474]: I0223 13:28:48.336800 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-config\") pod \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") "
Feb 23 13:28:48.337048 master-0 kubenswrapper[26474]: I0223 13:28:48.336836 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-nb\") pod \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") "
Feb 23 13:28:48.337048 master-0 kubenswrapper[26474]: I0223 13:28:48.336978 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-sb\") pod \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") "
Feb 23 13:28:48.337114 master-0 kubenswrapper[26474]: I0223 13:28:48.337057 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rb4l\" (UniqueName: \"kubernetes.io/projected/2ec9002a-dd92-48b5-92b8-af64bf2871a5-kube-api-access-2rb4l\") pod \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") "
Feb 23 13:28:48.337237 master-0 kubenswrapper[26474]: I0223 13:28:48.337210 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-swift-storage-0\") pod \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\" (UID: \"2ec9002a-dd92-48b5-92b8-af64bf2871a5\") "
Feb 23 13:28:48.354230 master-0 kubenswrapper[26474]: I0223 13:28:48.353823 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec9002a-dd92-48b5-92b8-af64bf2871a5-kube-api-access-2rb4l" (OuterVolumeSpecName: "kube-api-access-2rb4l") pod "2ec9002a-dd92-48b5-92b8-af64bf2871a5" (UID: "2ec9002a-dd92-48b5-92b8-af64bf2871a5"). InnerVolumeSpecName "kube-api-access-2rb4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:48.411671 master-0 kubenswrapper[26474]: I0223 13:28:48.403721 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-config" (OuterVolumeSpecName: "config") pod "2ec9002a-dd92-48b5-92b8-af64bf2871a5" (UID: "2ec9002a-dd92-48b5-92b8-af64bf2871a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:48.411671 master-0 kubenswrapper[26474]: I0223 13:28:48.404620 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:28:48.412547 master-0 kubenswrapper[26474]: I0223 13:28:48.412469 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ec9002a-dd92-48b5-92b8-af64bf2871a5" (UID: "2ec9002a-dd92-48b5-92b8-af64bf2871a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:48.413151 master-0 kubenswrapper[26474]: I0223 13:28:48.413028 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ec9002a-dd92-48b5-92b8-af64bf2871a5" (UID: "2ec9002a-dd92-48b5-92b8-af64bf2871a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:48.417526 master-0 kubenswrapper[26474]: I0223 13:28:48.417467 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c5c8b54d6-wsqw8"]
Feb 23 13:28:48.417934 master-0 kubenswrapper[26474]: E0223 13:28:48.417895 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerName="init"
Feb 23 13:28:48.417934 master-0 kubenswrapper[26474]: I0223 13:28:48.417915 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerName="init"
Feb 23 13:28:48.418023 master-0 kubenswrapper[26474]: E0223 13:28:48.417961 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerName="dnsmasq-dns"
Feb 23 13:28:48.418023 master-0 kubenswrapper[26474]: I0223 13:28:48.417970 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerName="dnsmasq-dns"
Feb 23 13:28:48.434427 master-0 kubenswrapper[26474]: I0223 13:28:48.429948 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ec9002a-dd92-48b5-92b8-af64bf2871a5" (UID: "2ec9002a-dd92-48b5-92b8-af64bf2871a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:48.434427 master-0 kubenswrapper[26474]: I0223 13:28:48.433735 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" containerName="dnsmasq-dns"
Feb 23 13:28:48.435579 master-0 kubenswrapper[26474]: I0223 13:28:48.435413 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c5c8b54d6-wsqw8"]
Feb 23 13:28:48.435579 master-0 kubenswrapper[26474]: I0223 13:28:48.435510 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.447983 master-0 kubenswrapper[26474]: I0223 13:28:48.447921 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:48.447983 master-0 kubenswrapper[26474]: I0223 13:28:48.447957 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:48.447983 master-0 kubenswrapper[26474]: I0223 13:28:48.447966 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:48.447983 master-0 kubenswrapper[26474]: I0223 13:28:48.447978 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:48.447983 master-0 kubenswrapper[26474]: I0223 13:28:48.447987 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rb4l\" (UniqueName: \"kubernetes.io/projected/2ec9002a-dd92-48b5-92b8-af64bf2871a5-kube-api-access-2rb4l\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:48.472480 master-0 kubenswrapper[26474]: I0223 13:28:48.472411 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ec9002a-dd92-48b5-92b8-af64bf2871a5" (UID: "2ec9002a-dd92-48b5-92b8-af64bf2871a5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:28:48.509932 master-0 kubenswrapper[26474]: I0223 13:28:48.509794 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-gzjvk"]
Feb 23 13:28:48.549729 master-0 kubenswrapper[26474]: I0223 13:28:48.549682 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-combined-ca-bundle\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.550159 master-0 kubenswrapper[26474]: I0223 13:28:48.549797 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-public-tls-certs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.550159 master-0 kubenswrapper[26474]: I0223 13:28:48.549828 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-internal-tls-certs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.550159 master-0 kubenswrapper[26474]: I0223 13:28:48.549878 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3911d376-fa1c-4062-bce7-7cd07d9d3244-logs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.550159 master-0 kubenswrapper[26474]: I0223 13:28:48.549903 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8972\" (UniqueName: \"kubernetes.io/projected/3911d376-fa1c-4062-bce7-7cd07d9d3244-kube-api-access-d8972\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.550159 master-0 kubenswrapper[26474]: I0223 13:28:48.549932 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-scripts\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.550392 master-0 kubenswrapper[26474]: I0223 13:28:48.550282 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-config-data\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.550392 master-0 kubenswrapper[26474]: I0223 13:28:48.550385 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ec9002a-dd92-48b5-92b8-af64bf2871a5-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 23 13:28:48.555486 master-0 kubenswrapper[26474]: I0223 13:28:48.554204 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4b6db685-b664p" event={"ID":"2ec9002a-dd92-48b5-92b8-af64bf2871a5","Type":"ContainerDied","Data":"5a6e5867f7510b8195e108dc851e432e7b1d14cfc01691cb4be12390ac7d281c"}
Feb 23 13:28:48.555486 master-0 kubenswrapper[26474]: I0223 13:28:48.554261 26474 scope.go:117] "RemoveContainer" containerID="f52ac843cd37fd6a277015924e303f909ddb46463aefbdea39afc30c2e6f99bd"
Feb 23 13:28:48.555486 master-0 kubenswrapper[26474]: I0223 13:28:48.554404 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d4b6db685-b664p"
Feb 23 13:28:48.565729 master-0 kubenswrapper[26474]: I0223 13:28:48.565690 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-gzjvk" event={"ID":"53c0fb4f-cbcb-4439-97c6-0b529f807785","Type":"ContainerStarted","Data":"e8e1a0e5ec671bdbb760aca97579b107f177c033b1bc20448020ca72b508b8b6"}
Feb 23 13:28:48.596673 master-0 kubenswrapper[26474]: I0223 13:28:48.596633 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4b6db685-b664p"]
Feb 23 13:28:48.604418 master-0 kubenswrapper[26474]: I0223 13:28:48.604366 26474 scope.go:117] "RemoveContainer" containerID="66fc1f7ca9f0029c171b3f83e7a41bcdb6ea0766df7fc5d07cb2ff6a40c4fb70"
Feb 23 13:28:48.606895 master-0 kubenswrapper[26474]: I0223 13:28:48.606835 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d4b6db685-b664p"]
Feb 23 13:28:48.652437 master-0 kubenswrapper[26474]: I0223 13:28:48.652376 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-public-tls-certs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.652437 master-0 kubenswrapper[26474]: I0223 13:28:48.652431 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-internal-tls-certs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.654318 master-0 kubenswrapper[26474]: I0223 13:28:48.653257 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3911d376-fa1c-4062-bce7-7cd07d9d3244-logs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.654318 master-0 kubenswrapper[26474]: I0223 13:28:48.653362 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8972\" (UniqueName: \"kubernetes.io/projected/3911d376-fa1c-4062-bce7-7cd07d9d3244-kube-api-access-d8972\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.654318 master-0 kubenswrapper[26474]: I0223 13:28:48.653445 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-scripts\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.654318 master-0 kubenswrapper[26474]: I0223 13:28:48.653659 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-config-data\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.654318 master-0 kubenswrapper[26474]: I0223 13:28:48.653735 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-combined-ca-bundle\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.658871 master-0 kubenswrapper[26474]: I0223 13:28:48.656812 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-public-tls-certs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.658871 master-0 kubenswrapper[26474]: I0223 13:28:48.657831 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3911d376-fa1c-4062-bce7-7cd07d9d3244-logs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.669461 master-0 kubenswrapper[26474]: I0223 13:28:48.659782 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-combined-ca-bundle\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.669461 master-0 kubenswrapper[26474]: I0223 13:28:48.660634 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-config-data\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.669461 master-0 kubenswrapper[26474]: I0223 13:28:48.660667 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-internal-tls-certs\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.669461 master-0 kubenswrapper[26474]: I0223 13:28:48.663817 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3911d376-fa1c-4062-bce7-7cd07d9d3244-scripts\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.674539 master-0 kubenswrapper[26474]: I0223 13:28:48.673034 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8972\" (UniqueName: \"kubernetes.io/projected/3911d376-fa1c-4062-bce7-7cd07d9d3244-kube-api-access-d8972\") pod \"placement-6c5c8b54d6-wsqw8\" (UID: \"3911d376-fa1c-4062-bce7-7cd07d9d3244\") " pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.762050 master-0 kubenswrapper[26474]: I0223 13:28:48.761999 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:28:48.886438 master-0 kubenswrapper[26474]: W0223 13:28:48.886154 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9725d464_c206_407c_9b4c_983607fe63d1.slice/crio-aa835b54f22612797dceb493f1dfe6b28f622cf1eec9f0680a1e0e576f3cdefa WatchSource:0}: Error finding container aa835b54f22612797dceb493f1dfe6b28f622cf1eec9f0680a1e0e576f3cdefa: Status 404 returned error can't find the container with id aa835b54f22612797dceb493f1dfe6b28f622cf1eec9f0680a1e0e576f3cdefa
Feb 23 13:28:48.895549 master-0 kubenswrapper[26474]: I0223 13:28:48.891048 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66dfd5f7c4-jsbfx"]
Feb 23 13:28:48.983579 master-0 kubenswrapper[26474]: I0223 13:28:48.983429 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f6c5d9dcc-k6qfz"]
Feb 23 13:28:49.270487 master-0 kubenswrapper[26474]: I0223 13:28:49.270370 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c5c8b54d6-wsqw8"]
Feb 23 13:28:49.284975 master-0 kubenswrapper[26474]: W0223 13:28:49.284917 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3911d376_fa1c_4062_bce7_7cd07d9d3244.slice/crio-1c94f82086d191130f6e47a4a989bdf22a710b5b11c08b0a1d05d4c8d28bbd47 WatchSource:0}: Error finding container 1c94f82086d191130f6e47a4a989bdf22a710b5b11c08b0a1d05d4c8d28bbd47: Status 404 returned error can't find the container with id 1c94f82086d191130f6e47a4a989bdf22a710b5b11c08b0a1d05d4c8d28bbd47
Feb 23 13:28:49.606441 master-0 kubenswrapper[26474]: I0223 13:28:49.606390 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66dfd5f7c4-jsbfx"
event={"ID":"9725d464-c206-407c-9b4c-983607fe63d1","Type":"ContainerStarted","Data":"1527ec91ea674eff5ff80556700585faa0273171d38d35b5a317e4f94cfb868c"} Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.606449 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66dfd5f7c4-jsbfx" event={"ID":"9725d464-c206-407c-9b4c-983607fe63d1","Type":"ContainerStarted","Data":"f6d72a61e386612847d6ee612fc2a5240c7e5b0617d2e237633addef58284877"} Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.606465 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66dfd5f7c4-jsbfx" event={"ID":"9725d464-c206-407c-9b4c-983607fe63d1","Type":"ContainerStarted","Data":"aa835b54f22612797dceb493f1dfe6b28f622cf1eec9f0680a1e0e576f3cdefa"} Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.607329 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66dfd5f7c4-jsbfx" Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.607376 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66dfd5f7c4-jsbfx" Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.610814 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5c8b54d6-wsqw8" event={"ID":"3911d376-fa1c-4062-bce7-7cd07d9d3244","Type":"ContainerStarted","Data":"1b58b3aeff57ce0fa86b4376fcd4b88f45327f845cf803f1ba2147415614101d"} Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.610968 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5c8b54d6-wsqw8" event={"ID":"3911d376-fa1c-4062-bce7-7cd07d9d3244","Type":"ContainerStarted","Data":"1c94f82086d191130f6e47a4a989bdf22a710b5b11c08b0a1d05d4c8d28bbd47"} Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.613507 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-f6c5d9dcc-k6qfz" event={"ID":"16a4bd6f-20fe-4899-874b-5bde9553a934","Type":"ContainerStarted","Data":"416d8e3df75ed9559559f9b8fc05dd5c56584b046bb4471b7a65f11179dcb917"} Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.613565 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f6c5d9dcc-k6qfz" event={"ID":"16a4bd6f-20fe-4899-874b-5bde9553a934","Type":"ContainerStarted","Data":"51e7e8da957039ceb703cb8b13427872735229b8b8f4a024255d0c1d66767b01"} Feb 23 13:28:49.617829 master-0 kubenswrapper[26474]: I0223 13:28:49.613710 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f6c5d9dcc-k6qfz" Feb 23 13:28:49.619054 master-0 kubenswrapper[26474]: I0223 13:28:49.618911 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-db-sync-fnmxd" event={"ID":"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b","Type":"ContainerStarted","Data":"9dc44285f0812b44ea8b04d188438cc99cbeb1df9fb197a391882c425bafec46"} Feb 23 13:28:49.648958 master-0 kubenswrapper[26474]: I0223 13:28:49.648803 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66dfd5f7c4-jsbfx" podStartSLOduration=2.648786681 podStartE2EDuration="2.648786681s" podCreationTimestamp="2026-02-23 13:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:49.639039643 +0000 UTC m=+851.485547320" watchObservedRunningTime="2026-02-23 13:28:49.648786681 +0000 UTC m=+851.495294358" Feb 23 13:28:49.670572 master-0 kubenswrapper[26474]: I0223 13:28:49.670470 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-db-sync-fnmxd" podStartSLOduration=4.122528791 podStartE2EDuration="25.670452669s" podCreationTimestamp="2026-02-23 13:28:24 +0000 UTC" firstStartedPulling="2026-02-23 13:28:26.424907183 +0000 UTC 
m=+828.271414860" lastFinishedPulling="2026-02-23 13:28:47.972831051 +0000 UTC m=+849.819338738" observedRunningTime="2026-02-23 13:28:49.66144144 +0000 UTC m=+851.507949117" watchObservedRunningTime="2026-02-23 13:28:49.670452669 +0000 UTC m=+851.516960346" Feb 23 13:28:49.703530 master-0 kubenswrapper[26474]: I0223 13:28:49.703457 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f6c5d9dcc-k6qfz" podStartSLOduration=2.7034372639999997 podStartE2EDuration="2.703437264s" podCreationTimestamp="2026-02-23 13:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:49.688704224 +0000 UTC m=+851.535211921" watchObservedRunningTime="2026-02-23 13:28:49.703437264 +0000 UTC m=+851.549944941" Feb 23 13:28:50.408093 master-0 kubenswrapper[26474]: I0223 13:28:50.407139 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec9002a-dd92-48b5-92b8-af64bf2871a5" path="/var/lib/kubelet/pods/2ec9002a-dd92-48b5-92b8-af64bf2871a5/volumes" Feb 23 13:28:50.640787 master-0 kubenswrapper[26474]: I0223 13:28:50.639940 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5c8b54d6-wsqw8" event={"ID":"3911d376-fa1c-4062-bce7-7cd07d9d3244","Type":"ContainerStarted","Data":"375ccf88e5c6da93911fd70485c7a8eef314c8cc3910bd98efe6c9385f801f59"} Feb 23 13:28:50.641637 master-0 kubenswrapper[26474]: I0223 13:28:50.640941 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c5c8b54d6-wsqw8" Feb 23 13:28:50.641637 master-0 kubenswrapper[26474]: I0223 13:28:50.641024 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c5c8b54d6-wsqw8" Feb 23 13:28:50.800423 master-0 kubenswrapper[26474]: I0223 13:28:50.798555 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-6c5c8b54d6-wsqw8" podStartSLOduration=2.798533753 podStartE2EDuration="2.798533753s" podCreationTimestamp="2026-02-23 13:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:50.793767086 +0000 UTC m=+852.640274783" watchObservedRunningTime="2026-02-23 13:28:50.798533753 +0000 UTC m=+852.645041430" Feb 23 13:28:57.777067 master-0 kubenswrapper[26474]: I0223 13:28:57.776910 26474 generic.go:334] "Generic (PLEG): container finished" podID="40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" containerID="9dc44285f0812b44ea8b04d188438cc99cbeb1df9fb197a391882c425bafec46" exitCode=0 Feb 23 13:28:57.777067 master-0 kubenswrapper[26474]: I0223 13:28:57.776971 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-db-sync-fnmxd" event={"ID":"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b","Type":"ContainerDied","Data":"9dc44285f0812b44ea8b04d188438cc99cbeb1df9fb197a391882c425bafec46"} Feb 23 13:28:58.793762 master-0 kubenswrapper[26474]: I0223 13:28:58.793632 26474 generic.go:334] "Generic (PLEG): container finished" podID="c6e1b664-e2c1-4b99-b1b9-9281ef4142d3" containerID="8dea65238a595dad8c07581919d15fc25d205b3b4aaeeb666eb74621aac64f66" exitCode=0 Feb 23 13:28:58.793762 master-0 kubenswrapper[26474]: I0223 13:28:58.793705 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kqjnt" event={"ID":"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3","Type":"ContainerDied","Data":"8dea65238a595dad8c07581919d15fc25d205b3b4aaeeb666eb74621aac64f66"} Feb 23 13:28:58.796671 master-0 kubenswrapper[26474]: I0223 13:28:58.796622 26474 generic.go:334] "Generic (PLEG): container finished" podID="53c0fb4f-cbcb-4439-97c6-0b529f807785" containerID="1968368b16472fa53af521ba00286a494e4197295f727ce7fba75c071ae11e34" exitCode=0 Feb 23 13:28:58.796773 master-0 kubenswrapper[26474]: I0223 13:28:58.796680 26474 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ironic-db-sync-gzjvk" event={"ID":"53c0fb4f-cbcb-4439-97c6-0b529f807785","Type":"ContainerDied","Data":"1968368b16472fa53af521ba00286a494e4197295f727ce7fba75c071ae11e34"} Feb 23 13:28:59.085653 master-0 kubenswrapper[26474]: E0223 13:28:59.084375 26474 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 23 13:28:59.085653 master-0 kubenswrapper[26474]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/53c0fb4f-cbcb-4439-97c6-0b529f807785/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Feb 23 13:28:59.085653 master-0 kubenswrapper[26474]: > podSandboxID="e8e1a0e5ec671bdbb760aca97579b107f177c033b1bc20448020ca72b508b8b6" Feb 23 13:28:59.085653 master-0 kubenswrapper[26474]: E0223 13:28:59.084542 26474 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:28:59.085653 master-0 kubenswrapper[26474]: container &Container{Name:ironic-db-sync,Image:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556,Command:[/bin/bash],Args:[-c 
/usr/local/bin/container-scripts/dbsync.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-podinfo,ReadOnly:false,MountPath:/etc/podinfo,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xvd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ironic-db-sync-gzjvk_openstack(53c0fb4f-cbcb-4439-97c6-0b529f807785): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/53c0fb4f-cbcb-4439-97c6-0b529f807785/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Feb 23 13:28:59.085653 master-0 kubenswrapper[26474]: > logger="UnhandledError" Feb 23 13:28:59.087596 master-0 kubenswrapper[26474]: E0223 13:28:59.085885 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-db-sync\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/53c0fb4f-cbcb-4439-97c6-0b529f807785/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory\\n\"" pod="openstack/ironic-db-sync-gzjvk" podUID="53c0fb4f-cbcb-4439-97c6-0b529f807785" Feb 23 13:28:59.305618 master-0 kubenswrapper[26474]: I0223 13:28:59.305578 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-083a9-db-sync-fnmxd" Feb 23 13:28:59.449744 master-0 kubenswrapper[26474]: I0223 13:28:59.449584 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-db-sync-config-data\") pod \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " Feb 23 13:28:59.450035 master-0 kubenswrapper[26474]: I0223 13:28:59.450006 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-config-data\") pod \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " Feb 23 13:28:59.450716 master-0 kubenswrapper[26474]: I0223 13:28:59.450659 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-etc-machine-id\") pod \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " Feb 23 13:28:59.450848 master-0 kubenswrapper[26474]: I0223 13:28:59.450799 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnvzd\" (UniqueName: \"kubernetes.io/projected/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-kube-api-access-rnvzd\") pod \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " Feb 23 13:28:59.450912 master-0 kubenswrapper[26474]: I0223 13:28:59.450842 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" (UID: "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:28:59.450993 master-0 kubenswrapper[26474]: I0223 13:28:59.450892 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-combined-ca-bundle\") pod \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " Feb 23 13:28:59.451058 master-0 kubenswrapper[26474]: I0223 13:28:59.451040 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-scripts\") pod \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\" (UID: \"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b\") " Feb 23 13:28:59.453051 master-0 kubenswrapper[26474]: I0223 13:28:59.452999 26474 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:59.454900 master-0 kubenswrapper[26474]: I0223 13:28:59.454814 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-kube-api-access-rnvzd" (OuterVolumeSpecName: "kube-api-access-rnvzd") pod "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" (UID: "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b"). InnerVolumeSpecName "kube-api-access-rnvzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:59.461659 master-0 kubenswrapper[26474]: I0223 13:28:59.461579 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" (UID: "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:59.462506 master-0 kubenswrapper[26474]: I0223 13:28:59.462464 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-scripts" (OuterVolumeSpecName: "scripts") pod "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" (UID: "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:59.500628 master-0 kubenswrapper[26474]: I0223 13:28:59.500562 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" (UID: "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:59.529712 master-0 kubenswrapper[26474]: I0223 13:28:59.529620 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-config-data" (OuterVolumeSpecName: "config-data") pod "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" (UID: "40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:59.555248 master-0 kubenswrapper[26474]: I0223 13:28:59.555177 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:59.555248 master-0 kubenswrapper[26474]: I0223 13:28:59.555252 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnvzd\" (UniqueName: \"kubernetes.io/projected/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-kube-api-access-rnvzd\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:59.555914 master-0 kubenswrapper[26474]: I0223 13:28:59.555278 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:59.555914 master-0 kubenswrapper[26474]: I0223 13:28:59.555297 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:59.555914 master-0 kubenswrapper[26474]: I0223 13:28:59.555315 26474 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:28:59.814199 master-0 kubenswrapper[26474]: I0223 13:28:59.814099 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-db-sync-fnmxd" event={"ID":"40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b","Type":"ContainerDied","Data":"4282ef04d32e27a52e2b427b82337211c15df51cd069147de2452b7989ee3bb3"} Feb 23 13:28:59.814199 master-0 kubenswrapper[26474]: I0223 13:28:59.814186 26474 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4282ef04d32e27a52e2b427b82337211c15df51cd069147de2452b7989ee3bb3" Feb 23 13:28:59.814199 master-0 kubenswrapper[26474]: I0223 13:28:59.814145 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-db-sync-fnmxd" Feb 23 13:29:00.281810 master-0 kubenswrapper[26474]: I0223 13:29:00.279848 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-scheduler-0"] Feb 23 13:29:00.283723 master-0 kubenswrapper[26474]: E0223 13:29:00.283556 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" containerName="cinder-083a9-db-sync" Feb 23 13:29:00.283723 master-0 kubenswrapper[26474]: I0223 13:29:00.283612 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" containerName="cinder-083a9-db-sync" Feb 23 13:29:00.284145 master-0 kubenswrapper[26474]: I0223 13:29:00.284067 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b" containerName="cinder-083a9-db-sync" Feb 23 13:29:00.297530 master-0 kubenswrapper[26474]: I0223 13:29:00.285767 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.298585 master-0 kubenswrapper[26474]: I0223 13:29:00.298549 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-scheduler-config-data" Feb 23 13:29:00.299321 master-0 kubenswrapper[26474]: I0223 13:29:00.298835 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-config-data" Feb 23 13:29:00.299506 master-0 kubenswrapper[26474]: I0223 13:29:00.298904 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-scripts" Feb 23 13:29:00.331613 master-0 kubenswrapper[26474]: I0223 13:29:00.317202 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-scheduler-0"] Feb 23 13:29:00.380249 master-0 kubenswrapper[26474]: I0223 13:29:00.380155 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cafd4cd-0870-40fb-97a2-30de667cd263-etc-machine-id\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.383390 master-0 kubenswrapper[26474]: I0223 13:29:00.383301 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data-custom\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.383751 master-0 kubenswrapper[26474]: I0223 13:29:00.383713 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " 
pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.384130 master-0 kubenswrapper[26474]: I0223 13:29:00.384106 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-combined-ca-bundle\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.384473 master-0 kubenswrapper[26474]: I0223 13:29:00.384449 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lrv\" (UniqueName: \"kubernetes.io/projected/7cafd4cd-0870-40fb-97a2-30de667cd263-kube-api-access-q6lrv\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.384795 master-0 kubenswrapper[26474]: I0223 13:29:00.384774 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-scripts\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.455070 master-0 kubenswrapper[26474]: I0223 13:29:00.449354 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kqjnt" Feb 23 13:29:00.503088 master-0 kubenswrapper[26474]: I0223 13:29:00.503004 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79dbc6b449-l8pvh"] Feb 23 13:29:00.512444 master-0 kubenswrapper[26474]: E0223 13:29:00.503841 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e1b664-e2c1-4b99-b1b9-9281ef4142d3" containerName="neutron-db-sync" Feb 23 13:29:00.512444 master-0 kubenswrapper[26474]: I0223 13:29:00.503864 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e1b664-e2c1-4b99-b1b9-9281ef4142d3" containerName="neutron-db-sync" Feb 23 13:29:00.512444 master-0 kubenswrapper[26474]: I0223 13:29:00.504131 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e1b664-e2c1-4b99-b1b9-9281ef4142d3" containerName="neutron-db-sync" Feb 23 13:29:00.512444 master-0 kubenswrapper[26474]: I0223 13:29:00.505594 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"] Feb 23 13:29:00.512444 master-0 kubenswrapper[26474]: I0223 13:29:00.505685 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.514313 master-0 kubenswrapper[26474]: I0223 13:29:00.514271 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cafd4cd-0870-40fb-97a2-30de667cd263-etc-machine-id\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.514574 master-0 kubenswrapper[26474]: I0223 13:29:00.514558 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data-custom\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.514701 master-0 kubenswrapper[26474]: I0223 13:29:00.514688 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.514957 master-0 kubenswrapper[26474]: I0223 13:29:00.514908 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cafd4cd-0870-40fb-97a2-30de667cd263-etc-machine-id\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.515162 master-0 kubenswrapper[26474]: I0223 13:29:00.515101 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-combined-ca-bundle\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " 
pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.515212 master-0 kubenswrapper[26474]: I0223 13:29:00.515186 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6lrv\" (UniqueName: \"kubernetes.io/projected/7cafd4cd-0870-40fb-97a2-30de667cd263-kube-api-access-q6lrv\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.515263 master-0 kubenswrapper[26474]: I0223 13:29:00.515243 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-scripts\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.524465 master-0 kubenswrapper[26474]: I0223 13:29:00.524426 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.525577 master-0 kubenswrapper[26474]: I0223 13:29:00.525509 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data-custom\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.526106 master-0 kubenswrapper[26474]: I0223 13:29:00.526078 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-scripts\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.529928 master-0 
kubenswrapper[26474]: I0223 13:29:00.529895 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-combined-ca-bundle\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.539602 master-0 kubenswrapper[26474]: I0223 13:29:00.539538 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79dbc6b449-l8pvh"] Feb 23 13:29:00.539602 master-0 kubenswrapper[26474]: I0223 13:29:00.539592 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"] Feb 23 13:29:00.539841 master-0 kubenswrapper[26474]: I0223 13:29:00.539721 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.542956 master-0 kubenswrapper[26474]: I0223 13:29:00.542918 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-volume-lvm-iscsi-config-data" Feb 23 13:29:00.545223 master-0 kubenswrapper[26474]: I0223 13:29:00.545191 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6lrv\" (UniqueName: \"kubernetes.io/projected/7cafd4cd-0870-40fb-97a2-30de667cd263-kube-api-access-q6lrv\") pod \"cinder-083a9-scheduler-0\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.546405 master-0 kubenswrapper[26474]: I0223 13:29:00.546368 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-backup-0"] Feb 23 13:29:00.573556 master-0 kubenswrapper[26474]: I0223 13:29:00.573487 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-backup-0"] Feb 23 13:29:00.573785 master-0 kubenswrapper[26474]: I0223 13:29:00.573626 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.575414 master-0 kubenswrapper[26474]: I0223 13:29:00.575275 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-backup-config-data" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.617064 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7xmv\" (UniqueName: \"kubernetes.io/projected/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-kube-api-access-r7xmv\") pod \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.617185 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-combined-ca-bundle\") pod \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.617467 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-config\") pod \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\" (UID: \"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3\") " Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.617868 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-combined-ca-bundle\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.617915 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-lib-modules\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.617933 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-machine-id\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.617952 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.617975 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-iscsi\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618000 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-lib-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618024 26474 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-svc\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618040 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-nvme\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618063 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwqc7\" (UniqueName: \"kubernetes.io/projected/c931e9e9-5999-442a-9b62-8b5b7dd8749b-kube-api-access-hwqc7\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618085 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-dev\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618119 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data-custom\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " 
pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618143 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-nb\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618162 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-sb\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618182 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-swift-storage-0\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618210 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-scripts\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618226 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618245 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-brick\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618270 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlsn\" (UniqueName: \"kubernetes.io/projected/a472c463-7365-4002-9785-ff5f086873f7-kube-api-access-knlsn\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618292 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-sys\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618308 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-run\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.619615 master-0 kubenswrapper[26474]: I0223 13:29:00.618349 26474 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-config\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.620701 master-0 kubenswrapper[26474]: I0223 13:29:00.620184 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-kube-api-access-r7xmv" (OuterVolumeSpecName: "kube-api-access-r7xmv") pod "c6e1b664-e2c1-4b99-b1b9-9281ef4142d3" (UID: "c6e1b664-e2c1-4b99-b1b9-9281ef4142d3"). InnerVolumeSpecName "kube-api-access-r7xmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:00.641505 master-0 kubenswrapper[26474]: I0223 13:29:00.641269 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:00.651149 master-0 kubenswrapper[26474]: I0223 13:29:00.651092 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6e1b664-e2c1-4b99-b1b9-9281ef4142d3" (UID: "c6e1b664-e2c1-4b99-b1b9-9281ef4142d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:00.658044 master-0 kubenswrapper[26474]: I0223 13:29:00.657326 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-config" (OuterVolumeSpecName: "config") pod "c6e1b664-e2c1-4b99-b1b9-9281ef4142d3" (UID: "c6e1b664-e2c1-4b99-b1b9-9281ef4142d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:00.721510 master-0 kubenswrapper[26474]: I0223 13:29:00.717057 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-api-0"] Feb 23 13:29:00.721510 master-0 kubenswrapper[26474]: I0223 13:29:00.719217 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.721510 master-0 kubenswrapper[26474]: I0223 13:29:00.721102 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwqc7\" (UniqueName: \"kubernetes.io/projected/c931e9e9-5999-442a-9b62-8b5b7dd8749b-kube-api-access-hwqc7\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.721510 master-0 kubenswrapper[26474]: I0223 13:29:00.721181 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.721510 master-0 kubenswrapper[26474]: I0223 13:29:00.721408 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-dev\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.721510 master-0 kubenswrapper[26474]: I0223 13:29:00.721500 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-lib-modules\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " 
pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.721861 master-0 kubenswrapper[26474]: I0223 13:29:00.721512 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-dev\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.721861 master-0 kubenswrapper[26474]: I0223 13:29:00.721597 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data-custom\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.721861 master-0 kubenswrapper[26474]: I0223 13:29:00.721656 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-run\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.721861 master-0 kubenswrapper[26474]: I0223 13:29:00.721668 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-api-config-data" Feb 23 13:29:00.721861 master-0 kubenswrapper[26474]: I0223 13:29:00.721689 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-nb\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.721861 master-0 kubenswrapper[26474]: I0223 13:29:00.721713 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-sb\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.721861 master-0 kubenswrapper[26474]: I0223 13:29:00.721854 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-scripts\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.722058 master-0 kubenswrapper[26474]: I0223 13:29:00.721892 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-swift-storage-0\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.722058 master-0 kubenswrapper[26474]: I0223 13:29:00.721962 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-scripts\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.722058 master-0 kubenswrapper[26474]: I0223 13:29:00.721984 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.722058 master-0 kubenswrapper[26474]: I0223 13:29:00.722000 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-brick\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.722058 master-0 kubenswrapper[26474]: I0223 13:29:00.722021 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data-custom\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.722058 master-0 kubenswrapper[26474]: I0223 13:29:00.722043 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-nvme\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.722231 master-0 kubenswrapper[26474]: I0223 13:29:00.722070 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-brick\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.722231 master-0 kubenswrapper[26474]: I0223 13:29:00.722158 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-dev\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.722412 master-0 kubenswrapper[26474]: I0223 13:29:00.722382 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-brick\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.723484 master-0 kubenswrapper[26474]: I0223 13:29:00.722651 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-nb\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.723484 master-0 kubenswrapper[26474]: I0223 13:29:00.722959 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.723484 master-0 kubenswrapper[26474]: I0223 13:29:00.722819 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knlsn\" (UniqueName: \"kubernetes.io/projected/a472c463-7365-4002-9785-ff5f086873f7-kube-api-access-knlsn\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.723484 master-0 kubenswrapper[26474]: I0223 13:29:00.723065 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-swift-storage-0\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.723484 master-0 kubenswrapper[26474]: I0223 13:29:00.723225 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-sys\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.723484 master-0 kubenswrapper[26474]: I0223 13:29:00.723066 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-sys\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.723484 master-0 kubenswrapper[26474]: I0223 13:29:00.723428 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-iscsi\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.723484 master-0 kubenswrapper[26474]: I0223 13:29:00.723450 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-run\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723496 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-sb\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723519 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-pbtdh\" (UniqueName: \"kubernetes.io/projected/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-kube-api-access-pbtdh\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723561 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-run\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723647 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723687 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-config\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723722 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-lib-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723820 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-combined-ca-bundle\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723851 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-machine-id\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.723922 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-combined-ca-bundle\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.724012 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-sys\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.724073 master-0 kubenswrapper[26474]: I0223 13:29:00.724063 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-lib-modules\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724093 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-machine-id\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724127 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724173 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-iscsi\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724246 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-lib-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724290 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-svc\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724316 26474 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-nvme\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724486 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724509 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7xmv\" (UniqueName: \"kubernetes.io/projected/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-kube-api-access-r7xmv\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:00.724556 master-0 kubenswrapper[26474]: I0223 13:29:00.724528 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e1b664-e2c1-4b99-b1b9-9281ef4142d3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:00.725944 master-0 kubenswrapper[26474]: I0223 13:29:00.724619 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-nvme\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.728227 master-0 kubenswrapper[26474]: I0223 13:29:00.726195 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-config\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.728227 master-0 kubenswrapper[26474]: 
I0223 13:29:00.726931 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-lib-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.728227 master-0 kubenswrapper[26474]: I0223 13:29:00.727006 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-iscsi\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.728227 master-0 kubenswrapper[26474]: I0223 13:29:00.727034 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-lib-modules\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.728227 master-0 kubenswrapper[26474]: I0223 13:29:00.727062 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-machine-id\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.728227 master-0 kubenswrapper[26474]: I0223 13:29:00.727666 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-svc\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.731955 master-0 kubenswrapper[26474]: I0223 13:29:00.729005 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-scripts\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.731955 master-0 kubenswrapper[26474]: I0223 13:29:00.729403 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data-custom\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.731955 master-0 kubenswrapper[26474]: I0223 13:29:00.730170 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-combined-ca-bundle\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.736009 master-0 kubenswrapper[26474]: I0223 13:29:00.733988 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-api-0"] Feb 23 13:29:00.738366 master-0 kubenswrapper[26474]: I0223 13:29:00.738308 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.740536 master-0 kubenswrapper[26474]: I0223 13:29:00.740030 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knlsn\" (UniqueName: \"kubernetes.io/projected/a472c463-7365-4002-9785-ff5f086873f7-kube-api-access-knlsn\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: 
\"a472c463-7365-4002-9785-ff5f086873f7\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.740536 master-0 kubenswrapper[26474]: I0223 13:29:00.740466 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwqc7\" (UniqueName: \"kubernetes.io/projected/c931e9e9-5999-442a-9b62-8b5b7dd8749b-kube-api-access-hwqc7\") pod \"dnsmasq-dns-79dbc6b449-l8pvh\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.826802 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.826862 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-logs\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.826896 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-lib-modules\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.826938 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-run\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " 
pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.826963 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-scripts\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827002 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-brick\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827021 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data-custom\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827040 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-nvme\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827063 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-dev\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 
13:29:00.827097 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-iscsi\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827117 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbtdh\" (UniqueName: \"kubernetes.io/projected/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-kube-api-access-pbtdh\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827139 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827158 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data-custom\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827175 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-lib-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827190 26474 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-scripts\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827248 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-combined-ca-bundle\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827268 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-machine-id\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827291 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-etc-machine-id\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827334 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827370 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-sys\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827389 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb6gx\" (UniqueName: \"kubernetes.io/projected/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-kube-api-access-kb6gx\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.827406 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-combined-ca-bundle\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.828099 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.828134 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-lib-modules\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.829940 master-0 kubenswrapper[26474]: I0223 13:29:00.828434 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-run\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.831075 master-0 kubenswrapper[26474]: I0223 13:29:00.830652 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-brick\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.831075 master-0 kubenswrapper[26474]: I0223 13:29:00.830958 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-scripts\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.831075 master-0 kubenswrapper[26474]: I0223 13:29:00.831009 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-lib-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.834218 master-0 kubenswrapper[26474]: I0223 13:29:00.831697 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-nvme\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.834218 master-0 kubenswrapper[26474]: I0223 13:29:00.831747 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-dev\") pod \"cinder-083a9-backup-0\" (UID: 
\"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.834218 master-0 kubenswrapper[26474]: I0223 13:29:00.831782 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-machine-id\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.834218 master-0 kubenswrapper[26474]: I0223 13:29:00.833191 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-iscsi\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.834218 master-0 kubenswrapper[26474]: I0223 13:29:00.833654 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-sys\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.834218 master-0 kubenswrapper[26474]: I0223 13:29:00.833749 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-combined-ca-bundle\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.837521 master-0 kubenswrapper[26474]: I0223 13:29:00.837105 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data-custom\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.837521 master-0 
kubenswrapper[26474]: I0223 13:29:00.837482 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.855674 master-0 kubenswrapper[26474]: I0223 13:29:00.854360 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-gzjvk" event={"ID":"53c0fb4f-cbcb-4439-97c6-0b529f807785","Type":"ContainerStarted","Data":"a78b3cb622ee3b208271d62b3ce2d26ba8cfa1321e5a7dda16561dac5987a158"} Feb 23 13:29:00.856848 master-0 kubenswrapper[26474]: I0223 13:29:00.856764 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kqjnt" event={"ID":"c6e1b664-e2c1-4b99-b1b9-9281ef4142d3","Type":"ContainerDied","Data":"4b291f8e6d9bae852675bc4ffc81be22921c6d6e75952b90fe4a104f0ba89a21"} Feb 23 13:29:00.856848 master-0 kubenswrapper[26474]: I0223 13:29:00.856811 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b291f8e6d9bae852675bc4ffc81be22921c6d6e75952b90fe4a104f0ba89a21" Feb 23 13:29:00.856958 master-0 kubenswrapper[26474]: I0223 13:29:00.856869 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kqjnt" Feb 23 13:29:00.864136 master-0 kubenswrapper[26474]: I0223 13:29:00.862722 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbtdh\" (UniqueName: \"kubernetes.io/projected/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-kube-api-access-pbtdh\") pod \"cinder-083a9-backup-0\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:00.917543 master-0 kubenswrapper[26474]: I0223 13:29:00.916200 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-gzjvk" podStartSLOduration=16.431150998 podStartE2EDuration="25.916183261s" podCreationTimestamp="2026-02-23 13:28:35 +0000 UTC" firstStartedPulling="2026-02-23 13:28:48.511291338 +0000 UTC m=+850.357799015" lastFinishedPulling="2026-02-23 13:28:57.996323601 +0000 UTC m=+859.842831278" observedRunningTime="2026-02-23 13:29:00.912819288 +0000 UTC m=+862.759326965" watchObservedRunningTime="2026-02-23 13:29:00.916183261 +0000 UTC m=+862.762690938" Feb 23 13:29:00.918853 master-0 kubenswrapper[26474]: I0223 13:29:00.918322 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:00.930736 master-0 kubenswrapper[26474]: I0223 13:29:00.929775 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data-custom\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.930736 master-0 kubenswrapper[26474]: I0223 13:29:00.929842 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-scripts\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.930736 master-0 kubenswrapper[26474]: I0223 13:29:00.929890 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-etc-machine-id\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.930736 master-0 kubenswrapper[26474]: I0223 13:29:00.929927 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.930736 master-0 kubenswrapper[26474]: I0223 13:29:00.929952 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb6gx\" (UniqueName: \"kubernetes.io/projected/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-kube-api-access-kb6gx\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 
13:29:00.930736 master-0 kubenswrapper[26474]: I0223 13:29:00.929970 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-combined-ca-bundle\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.930736 master-0 kubenswrapper[26474]: I0223 13:29:00.930024 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-logs\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.935313 master-0 kubenswrapper[26474]: I0223 13:29:00.935280 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-logs\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.949026 master-0 kubenswrapper[26474]: I0223 13:29:00.936446 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-etc-machine-id\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.971422 master-0 kubenswrapper[26474]: I0223 13:29:00.968718 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data-custom\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.980360 master-0 kubenswrapper[26474]: I0223 13:29:00.974038 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-scripts\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.980360 master-0 kubenswrapper[26474]: I0223 13:29:00.975436 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.985703 master-0 kubenswrapper[26474]: I0223 13:29:00.984301 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-combined-ca-bundle\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.986387 master-0 kubenswrapper[26474]: I0223 13:29:00.986360 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb6gx\" (UniqueName: \"kubernetes.io/projected/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-kube-api-access-kb6gx\") pod \"cinder-083a9-api-0\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") " pod="openstack/cinder-083a9-api-0" Feb 23 13:29:00.994423 master-0 kubenswrapper[26474]: I0223 13:29:00.992146 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:00.995185 master-0 kubenswrapper[26474]: I0223 13:29:00.995146 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:01.054819 master-0 kubenswrapper[26474]: I0223 13:29:01.051519 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-083a9-api-0" Feb 23 13:29:01.209601 master-0 kubenswrapper[26474]: I0223 13:29:01.209423 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79dbc6b449-l8pvh"] Feb 23 13:29:01.237272 master-0 kubenswrapper[26474]: I0223 13:29:01.236269 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f5cdb5b55-g9brn"] Feb 23 13:29:01.238981 master-0 kubenswrapper[26474]: I0223 13:29:01.238363 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.286575 master-0 kubenswrapper[26474]: I0223 13:29:01.286519 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-scheduler-0"] Feb 23 13:29:01.301062 master-0 kubenswrapper[26474]: I0223 13:29:01.300858 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f5cdb5b55-g9brn"] Feb 23 13:29:01.303289 master-0 kubenswrapper[26474]: W0223 13:29:01.303244 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cafd4cd_0870_40fb_97a2_30de667cd263.slice/crio-952bf7a79a69a43f2ccf3116db9761c573161476f097b078d9b54c548001961d WatchSource:0}: Error finding container 952bf7a79a69a43f2ccf3116db9761c573161476f097b078d9b54c548001961d: Status 404 returned error can't find the container with id 952bf7a79a69a43f2ccf3116db9761c573161476f097b078d9b54c548001961d Feb 23 13:29:01.356667 master-0 kubenswrapper[26474]: I0223 13:29:01.356617 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-svc\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.356853 master-0 kubenswrapper[26474]: I0223 
13:29:01.356706 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-sb\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.356853 master-0 kubenswrapper[26474]: I0223 13:29:01.356732 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-nb\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.356853 master-0 kubenswrapper[26474]: I0223 13:29:01.356751 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-config\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.356853 master-0 kubenswrapper[26474]: I0223 13:29:01.356846 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29w5\" (UniqueName: \"kubernetes.io/projected/b203cc4c-b77a-4c7e-9303-d980d46b630b-kube-api-access-z29w5\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.356980 master-0 kubenswrapper[26474]: I0223 13:29:01.356924 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-swift-storage-0\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: 
\"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.368809 master-0 kubenswrapper[26474]: I0223 13:29:01.368734 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fb588f498-9ctlt"] Feb 23 13:29:01.371447 master-0 kubenswrapper[26474]: I0223 13:29:01.371398 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.378204 master-0 kubenswrapper[26474]: I0223 13:29:01.378169 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 13:29:01.380885 master-0 kubenswrapper[26474]: I0223 13:29:01.380212 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fb588f498-9ctlt"] Feb 23 13:29:01.381013 master-0 kubenswrapper[26474]: I0223 13:29:01.378322 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 23 13:29:01.381474 master-0 kubenswrapper[26474]: I0223 13:29:01.378481 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459031 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-httpd-config\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459111 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z29w5\" (UniqueName: \"kubernetes.io/projected/b203cc4c-b77a-4c7e-9303-d980d46b630b-kube-api-access-z29w5\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 
13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459185 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-swift-storage-0\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459234 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-ovndb-tls-certs\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459265 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-combined-ca-bundle\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459294 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-config\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459354 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-svc\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " 
pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459428 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-sb\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459447 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-config\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459463 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-nb\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.461463 master-0 kubenswrapper[26474]: I0223 13:29:01.459479 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cw96\" (UniqueName: \"kubernetes.io/projected/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-kube-api-access-2cw96\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.473250 master-0 kubenswrapper[26474]: I0223 13:29:01.472127 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-config\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " 
pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.473250 master-0 kubenswrapper[26474]: I0223 13:29:01.472931 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-sb\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.483597 master-0 kubenswrapper[26474]: I0223 13:29:01.483547 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-svc\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.490602 master-0 kubenswrapper[26474]: I0223 13:29:01.490527 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z29w5\" (UniqueName: \"kubernetes.io/projected/b203cc4c-b77a-4c7e-9303-d980d46b630b-kube-api-access-z29w5\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.492076 master-0 kubenswrapper[26474]: I0223 13:29:01.492041 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-swift-storage-0\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.494112 master-0 kubenswrapper[26474]: I0223 13:29:01.494068 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-nb\") pod \"dnsmasq-dns-7f5cdb5b55-g9brn\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") " 
pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.572611 master-0 kubenswrapper[26474]: I0223 13:29:01.572530 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-httpd-config\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.572807 master-0 kubenswrapper[26474]: I0223 13:29:01.572676 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-ovndb-tls-certs\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.572807 master-0 kubenswrapper[26474]: I0223 13:29:01.572710 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-combined-ca-bundle\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.572807 master-0 kubenswrapper[26474]: I0223 13:29:01.572740 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-config\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.572807 master-0 kubenswrapper[26474]: I0223 13:29:01.572804 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cw96\" (UniqueName: \"kubernetes.io/projected/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-kube-api-access-2cw96\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " 
pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.583183 master-0 kubenswrapper[26474]: I0223 13:29:01.583142 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-combined-ca-bundle\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.602525 master-0 kubenswrapper[26474]: I0223 13:29:01.602374 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-httpd-config\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.607059 master-0 kubenswrapper[26474]: I0223 13:29:01.604104 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-ovndb-tls-certs\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.607059 master-0 kubenswrapper[26474]: I0223 13:29:01.606222 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-config\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.610627 master-0 kubenswrapper[26474]: I0223 13:29:01.610528 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cw96\" (UniqueName: \"kubernetes.io/projected/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-kube-api-access-2cw96\") pod \"neutron-5fb588f498-9ctlt\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.695005 master-0 
kubenswrapper[26474]: I0223 13:29:01.694514 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:01.719659 master-0 kubenswrapper[26474]: I0223 13:29:01.719555 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:01.852764 master-0 kubenswrapper[26474]: I0223 13:29:01.852237 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79dbc6b449-l8pvh"] Feb 23 13:29:01.883946 master-0 kubenswrapper[26474]: I0223 13:29:01.883872 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"7cafd4cd-0870-40fb-97a2-30de667cd263","Type":"ContainerStarted","Data":"952bf7a79a69a43f2ccf3116db9761c573161476f097b078d9b54c548001961d"} Feb 23 13:29:01.885989 master-0 kubenswrapper[26474]: I0223 13:29:01.885929 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" event={"ID":"c931e9e9-5999-442a-9b62-8b5b7dd8749b","Type":"ContainerStarted","Data":"d7b015b5afbda972f8ef951b10fb0fc3b2422c865a6fa1e3afcbaabbd23c5643"} Feb 23 13:29:02.068499 master-0 kubenswrapper[26474]: I0223 13:29:02.068452 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-api-0"] Feb 23 13:29:02.094134 master-0 kubenswrapper[26474]: W0223 13:29:02.094091 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5861dd3e_6dae_44d2_a52b_c56e9b12ca62.slice/crio-e13a13cdaaee3a6d34f369590d75978e709ab7dd8ad2f65061344e17b88fec79 WatchSource:0}: Error finding container e13a13cdaaee3a6d34f369590d75978e709ab7dd8ad2f65061344e17b88fec79: Status 404 returned error can't find the container with id e13a13cdaaee3a6d34f369590d75978e709ab7dd8ad2f65061344e17b88fec79 Feb 23 13:29:02.197865 master-0 kubenswrapper[26474]: I0223 13:29:02.197786 26474 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-backup-0"] Feb 23 13:29:02.200663 master-0 kubenswrapper[26474]: W0223 13:29:02.200629 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaedc71b5_c5f8_4c3f_a9d5_3a939c84a3a8.slice/crio-3b005c902abaee5787256e92ce8307a501f67f75f3fb827acdf6634a7fb4acf2 WatchSource:0}: Error finding container 3b005c902abaee5787256e92ce8307a501f67f75f3fb827acdf6634a7fb4acf2: Status 404 returned error can't find the container with id 3b005c902abaee5787256e92ce8307a501f67f75f3fb827acdf6634a7fb4acf2 Feb 23 13:29:02.318081 master-0 kubenswrapper[26474]: I0223 13:29:02.318036 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"] Feb 23 13:29:02.444562 master-0 kubenswrapper[26474]: I0223 13:29:02.444469 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f5cdb5b55-g9brn"] Feb 23 13:29:02.516937 master-0 kubenswrapper[26474]: I0223 13:29:02.516892 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fb588f498-9ctlt"] Feb 23 13:29:02.913740 master-0 kubenswrapper[26474]: I0223 13:29:02.913674 26474 generic.go:334] "Generic (PLEG): container finished" podID="c931e9e9-5999-442a-9b62-8b5b7dd8749b" containerID="b570cac1b69d72cd831670a2816cf926593cce45aeb92747d4f773f231ce7eb7" exitCode=0 Feb 23 13:29:02.920377 master-0 kubenswrapper[26474]: I0223 13:29:02.913751 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" event={"ID":"c931e9e9-5999-442a-9b62-8b5b7dd8749b","Type":"ContainerDied","Data":"b570cac1b69d72cd831670a2816cf926593cce45aeb92747d4f773f231ce7eb7"} Feb 23 13:29:02.920377 master-0 kubenswrapper[26474]: I0223 13:29:02.917192 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" 
event={"ID":"a472c463-7365-4002-9785-ff5f086873f7","Type":"ContainerStarted","Data":"69ec1315299ee08186a8d676b3bcc5b8362a6e381497b4747ce568fb25b5dc3d"} Feb 23 13:29:02.921172 master-0 kubenswrapper[26474]: I0223 13:29:02.921131 26474 generic.go:334] "Generic (PLEG): container finished" podID="b203cc4c-b77a-4c7e-9303-d980d46b630b" containerID="e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f" exitCode=0 Feb 23 13:29:02.921382 master-0 kubenswrapper[26474]: I0223 13:29:02.921284 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" event={"ID":"b203cc4c-b77a-4c7e-9303-d980d46b630b","Type":"ContainerDied","Data":"e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f"} Feb 23 13:29:02.921382 master-0 kubenswrapper[26474]: I0223 13:29:02.921313 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" event={"ID":"b203cc4c-b77a-4c7e-9303-d980d46b630b","Type":"ContainerStarted","Data":"4c9c9a1b7b87c134b2a4fb7388ecd6cfd6c1d32c02ce05f5eb7f0939799dbb40"} Feb 23 13:29:02.942685 master-0 kubenswrapper[26474]: I0223 13:29:02.942612 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" event={"ID":"5861dd3e-6dae-44d2-a52b-c56e9b12ca62","Type":"ContainerStarted","Data":"e13a13cdaaee3a6d34f369590d75978e709ab7dd8ad2f65061344e17b88fec79"} Feb 23 13:29:02.950802 master-0 kubenswrapper[26474]: I0223 13:29:02.950730 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb588f498-9ctlt" event={"ID":"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb","Type":"ContainerStarted","Data":"b041adfb5d532197cf22656a64d17c598f1422d06dc8e730c00ec6f7bc0c34b8"} Feb 23 13:29:02.950802 master-0 kubenswrapper[26474]: I0223 13:29:02.950783 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb588f498-9ctlt" 
event={"ID":"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb","Type":"ContainerStarted","Data":"6f213a92b33522e82ac39656954e3c0560f616ceaa241006b0b6828300c55eca"} Feb 23 13:29:02.956823 master-0 kubenswrapper[26474]: I0223 13:29:02.956732 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8","Type":"ContainerStarted","Data":"3b005c902abaee5787256e92ce8307a501f67f75f3fb827acdf6634a7fb4acf2"} Feb 23 13:29:03.438156 master-0 kubenswrapper[26474]: I0223 13:29:03.438012 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:03.538638 master-0 kubenswrapper[26474]: I0223 13:29:03.535401 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-nb\") pod \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " Feb 23 13:29:03.538638 master-0 kubenswrapper[26474]: I0223 13:29:03.535486 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-svc\") pod \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " Feb 23 13:29:03.538638 master-0 kubenswrapper[26474]: I0223 13:29:03.535538 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwqc7\" (UniqueName: \"kubernetes.io/projected/c931e9e9-5999-442a-9b62-8b5b7dd8749b-kube-api-access-hwqc7\") pod \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " Feb 23 13:29:03.538638 master-0 kubenswrapper[26474]: I0223 13:29:03.535596 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-sb\") pod \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " Feb 23 13:29:03.538638 master-0 kubenswrapper[26474]: I0223 13:29:03.535743 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-swift-storage-0\") pod \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " Feb 23 13:29:03.538638 master-0 kubenswrapper[26474]: I0223 13:29:03.535832 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-config\") pod \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\" (UID: \"c931e9e9-5999-442a-9b62-8b5b7dd8749b\") " Feb 23 13:29:03.547357 master-0 kubenswrapper[26474]: I0223 13:29:03.547283 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c931e9e9-5999-442a-9b62-8b5b7dd8749b-kube-api-access-hwqc7" (OuterVolumeSpecName: "kube-api-access-hwqc7") pod "c931e9e9-5999-442a-9b62-8b5b7dd8749b" (UID: "c931e9e9-5999-442a-9b62-8b5b7dd8749b"). InnerVolumeSpecName "kube-api-access-hwqc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:03.571371 master-0 kubenswrapper[26474]: I0223 13:29:03.570752 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c931e9e9-5999-442a-9b62-8b5b7dd8749b" (UID: "c931e9e9-5999-442a-9b62-8b5b7dd8749b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:03.574155 master-0 kubenswrapper[26474]: I0223 13:29:03.572928 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-config" (OuterVolumeSpecName: "config") pod "c931e9e9-5999-442a-9b62-8b5b7dd8749b" (UID: "c931e9e9-5999-442a-9b62-8b5b7dd8749b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:03.582079 master-0 kubenswrapper[26474]: I0223 13:29:03.582019 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c931e9e9-5999-442a-9b62-8b5b7dd8749b" (UID: "c931e9e9-5999-442a-9b62-8b5b7dd8749b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:03.635976 master-0 kubenswrapper[26474]: I0223 13:29:03.635903 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c931e9e9-5999-442a-9b62-8b5b7dd8749b" (UID: "c931e9e9-5999-442a-9b62-8b5b7dd8749b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:03.639170 master-0 kubenswrapper[26474]: I0223 13:29:03.639072 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:03.639170 master-0 kubenswrapper[26474]: I0223 13:29:03.639113 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:03.639170 master-0 kubenswrapper[26474]: I0223 13:29:03.639126 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwqc7\" (UniqueName: \"kubernetes.io/projected/c931e9e9-5999-442a-9b62-8b5b7dd8749b-kube-api-access-hwqc7\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:03.639170 master-0 kubenswrapper[26474]: I0223 13:29:03.639137 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:03.639170 master-0 kubenswrapper[26474]: I0223 13:29:03.639148 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:03.827830 master-0 kubenswrapper[26474]: I0223 13:29:03.731704 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c931e9e9-5999-442a-9b62-8b5b7dd8749b" (UID: "c931e9e9-5999-442a-9b62-8b5b7dd8749b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:03.827830 master-0 kubenswrapper[26474]: I0223 13:29:03.742697 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c931e9e9-5999-442a-9b62-8b5b7dd8749b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:03.920375 master-0 kubenswrapper[26474]: I0223 13:29:03.913850 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-083a9-api-0"] Feb 23 13:29:03.974821 master-0 kubenswrapper[26474]: I0223 13:29:03.974425 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" event={"ID":"c931e9e9-5999-442a-9b62-8b5b7dd8749b","Type":"ContainerDied","Data":"d7b015b5afbda972f8ef951b10fb0fc3b2422c865a6fa1e3afcbaabbd23c5643"} Feb 23 13:29:03.974821 master-0 kubenswrapper[26474]: I0223 13:29:03.974465 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79dbc6b449-l8pvh" Feb 23 13:29:03.974821 master-0 kubenswrapper[26474]: I0223 13:29:03.974489 26474 scope.go:117] "RemoveContainer" containerID="b570cac1b69d72cd831670a2816cf926593cce45aeb92747d4f773f231ce7eb7" Feb 23 13:29:03.983475 master-0 kubenswrapper[26474]: I0223 13:29:03.982597 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" event={"ID":"a472c463-7365-4002-9785-ff5f086873f7","Type":"ContainerStarted","Data":"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51"} Feb 23 13:29:03.985535 master-0 kubenswrapper[26474]: I0223 13:29:03.985486 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" event={"ID":"b203cc4c-b77a-4c7e-9303-d980d46b630b","Type":"ContainerStarted","Data":"3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0"} Feb 23 13:29:03.985647 master-0 kubenswrapper[26474]: I0223 13:29:03.985623 26474 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:03.987806 master-0 kubenswrapper[26474]: I0223 13:29:03.987768 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" event={"ID":"5861dd3e-6dae-44d2-a52b-c56e9b12ca62","Type":"ContainerStarted","Data":"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61"} Feb 23 13:29:03.991070 master-0 kubenswrapper[26474]: I0223 13:29:03.989470 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"7cafd4cd-0870-40fb-97a2-30de667cd263","Type":"ContainerStarted","Data":"60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631"} Feb 23 13:29:03.992084 master-0 kubenswrapper[26474]: I0223 13:29:03.991766 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb588f498-9ctlt" event={"ID":"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb","Type":"ContainerStarted","Data":"9b36dfa74135b1ee14813aa452de0d0c2e90a381d065c2a28e08d603f400241c"} Feb 23 13:29:03.993901 master-0 kubenswrapper[26474]: I0223 13:29:03.992693 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:04.001972 master-0 kubenswrapper[26474]: I0223 13:29:03.996411 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8","Type":"ContainerStarted","Data":"7d164d3b19e62e442178275016798866803f4043cd6a260fa05f4cfd3c67da52"} Feb 23 13:29:04.090663 master-0 kubenswrapper[26474]: I0223 13:29:04.084178 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fb588f498-9ctlt" podStartSLOduration=3.084141248 podStartE2EDuration="3.084141248s" podCreationTimestamp="2026-02-23 13:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-23 13:29:04.052738892 +0000 UTC m=+865.899246569" watchObservedRunningTime="2026-02-23 13:29:04.084141248 +0000 UTC m=+865.930648965" Feb 23 13:29:04.096546 master-0 kubenswrapper[26474]: I0223 13:29:04.095915 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" podStartSLOduration=3.095896945 podStartE2EDuration="3.095896945s" podCreationTimestamp="2026-02-23 13:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:04.090123784 +0000 UTC m=+865.936631471" watchObservedRunningTime="2026-02-23 13:29:04.095896945 +0000 UTC m=+865.942404622" Feb 23 13:29:04.215212 master-0 kubenswrapper[26474]: I0223 13:29:04.214967 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79dbc6b449-l8pvh"] Feb 23 13:29:04.227710 master-0 kubenswrapper[26474]: I0223 13:29:04.227659 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79dbc6b449-l8pvh"] Feb 23 13:29:04.416807 master-0 kubenswrapper[26474]: I0223 13:29:04.416679 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c931e9e9-5999-442a-9b62-8b5b7dd8749b" path="/var/lib/kubelet/pods/c931e9e9-5999-442a-9b62-8b5b7dd8749b/volumes" Feb 23 13:29:05.011373 master-0 kubenswrapper[26474]: I0223 13:29:05.010895 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" event={"ID":"a472c463-7365-4002-9785-ff5f086873f7","Type":"ContainerStarted","Data":"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f"} Feb 23 13:29:05.019362 master-0 kubenswrapper[26474]: I0223 13:29:05.017883 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" 
event={"ID":"5861dd3e-6dae-44d2-a52b-c56e9b12ca62","Type":"ContainerStarted","Data":"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e"} Feb 23 13:29:05.019362 master-0 kubenswrapper[26474]: I0223 13:29:05.018096 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-083a9-api-0" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerName="cinder-083a9-api-log" containerID="cri-o://a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61" gracePeriod=30 Feb 23 13:29:05.019362 master-0 kubenswrapper[26474]: I0223 13:29:05.018407 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-083a9-api-0" Feb 23 13:29:05.019362 master-0 kubenswrapper[26474]: I0223 13:29:05.018460 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-083a9-api-0" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerName="cinder-api" containerID="cri-o://44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e" gracePeriod=30 Feb 23 13:29:05.039230 master-0 kubenswrapper[26474]: I0223 13:29:05.037739 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"7cafd4cd-0870-40fb-97a2-30de667cd263","Type":"ContainerStarted","Data":"fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3"} Feb 23 13:29:05.056869 master-0 kubenswrapper[26474]: I0223 13:29:05.056291 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" podStartSLOduration=3.946448491 podStartE2EDuration="5.05626882s" podCreationTimestamp="2026-02-23 13:29:00 +0000 UTC" firstStartedPulling="2026-02-23 13:29:02.331894537 +0000 UTC m=+864.178402214" lastFinishedPulling="2026-02-23 13:29:03.441714866 +0000 UTC m=+865.288222543" observedRunningTime="2026-02-23 13:29:05.035049023 +0000 UTC m=+866.881556700" watchObservedRunningTime="2026-02-23 
13:29:05.05626882 +0000 UTC m=+866.902776497"
Feb 23 13:29:05.059889 master-0 kubenswrapper[26474]: I0223 13:29:05.059438 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8","Type":"ContainerStarted","Data":"e32bda4630a090753107d70fbf4eef489e4d85c2a23a7d4934cf0b47e4f60ee8"}
Feb 23 13:29:05.084094 master-0 kubenswrapper[26474]: I0223 13:29:05.083903 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-api-0" podStartSLOduration=5.083881093 podStartE2EDuration="5.083881093s" podCreationTimestamp="2026-02-23 13:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:05.080872409 +0000 UTC m=+866.927380086" watchObservedRunningTime="2026-02-23 13:29:05.083881093 +0000 UTC m=+866.930388770"
Feb 23 13:29:05.121060 master-0 kubenswrapper[26474]: I0223 13:29:05.120053 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-scheduler-0" podStartSLOduration=4.449052824 podStartE2EDuration="5.120024504s" podCreationTimestamp="2026-02-23 13:29:00 +0000 UTC" firstStartedPulling="2026-02-23 13:29:01.339771687 +0000 UTC m=+863.186279364" lastFinishedPulling="2026-02-23 13:29:02.010743367 +0000 UTC m=+863.857251044" observedRunningTime="2026-02-23 13:29:05.102244311 +0000 UTC m=+866.948752008" watchObservedRunningTime="2026-02-23 13:29:05.120024504 +0000 UTC m=+866.966532181"
Feb 23 13:29:05.129591 master-0 kubenswrapper[26474]: E0223 13:29:05.129544 26474 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5861dd3e_6dae_44d2_a52b_c56e9b12ca62.slice/crio-conmon-a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61.scope\": RecentStats: unable to find data in memory cache]"
Feb 23 13:29:05.136917 master-0 kubenswrapper[26474]: I0223 13:29:05.136805 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-backup-0" podStartSLOduration=4.38365652 podStartE2EDuration="5.136784692s" podCreationTimestamp="2026-02-23 13:29:00 +0000 UTC" firstStartedPulling="2026-02-23 13:29:02.202746958 +0000 UTC m=+864.049254635" lastFinishedPulling="2026-02-23 13:29:02.95587512 +0000 UTC m=+864.802382807" observedRunningTime="2026-02-23 13:29:05.125546108 +0000 UTC m=+866.972053795" watchObservedRunningTime="2026-02-23 13:29:05.136784692 +0000 UTC m=+866.983292379"
Feb 23 13:29:05.644424 master-0 kubenswrapper[26474]: I0223 13:29:05.642775 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-083a9-scheduler-0"
Feb 23 13:29:05.818190 master-0 kubenswrapper[26474]: I0223 13:29:05.818136 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:05.987596 master-0 kubenswrapper[26474]: I0223 13:29:05.987546 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data\") pod \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") "
Feb 23 13:29:05.987724 master-0 kubenswrapper[26474]: I0223 13:29:05.987618 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data-custom\") pod \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") "
Feb 23 13:29:05.987908 master-0 kubenswrapper[26474]: I0223 13:29:05.987881 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-etc-machine-id\") pod \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") "
Feb 23 13:29:05.987951 master-0 kubenswrapper[26474]: I0223 13:29:05.987925 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-combined-ca-bundle\") pod \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") "
Feb 23 13:29:05.987983 master-0 kubenswrapper[26474]: I0223 13:29:05.987954 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb6gx\" (UniqueName: \"kubernetes.io/projected/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-kube-api-access-kb6gx\") pod \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") "
Feb 23 13:29:05.988058 master-0 kubenswrapper[26474]: I0223 13:29:05.988029 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-scripts\") pod \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") "
Feb 23 13:29:05.988174 master-0 kubenswrapper[26474]: I0223 13:29:05.988149 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-logs\") pod \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\" (UID: \"5861dd3e-6dae-44d2-a52b-c56e9b12ca62\") "
Feb 23 13:29:05.989725 master-0 kubenswrapper[26474]: I0223 13:29:05.989649 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-logs" (OuterVolumeSpecName: "logs") pod "5861dd3e-6dae-44d2-a52b-c56e9b12ca62" (UID: "5861dd3e-6dae-44d2-a52b-c56e9b12ca62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:29:05.990285 master-0 kubenswrapper[26474]: I0223 13:29:05.990256 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5861dd3e-6dae-44d2-a52b-c56e9b12ca62" (UID: "5861dd3e-6dae-44d2-a52b-c56e9b12ca62"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:05.993697 master-0 kubenswrapper[26474]: I0223 13:29:05.993600 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:05.994643 master-0 kubenswrapper[26474]: I0223 13:29:05.994557 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5861dd3e-6dae-44d2-a52b-c56e9b12ca62" (UID: "5861dd3e-6dae-44d2-a52b-c56e9b12ca62"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:05.994701 master-0 kubenswrapper[26474]: I0223 13:29:05.994631 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-kube-api-access-kb6gx" (OuterVolumeSpecName: "kube-api-access-kb6gx") pod "5861dd3e-6dae-44d2-a52b-c56e9b12ca62" (UID: "5861dd3e-6dae-44d2-a52b-c56e9b12ca62"). InnerVolumeSpecName "kube-api-access-kb6gx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:05.995007 master-0 kubenswrapper[26474]: I0223 13:29:05.994938 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-scripts" (OuterVolumeSpecName: "scripts") pod "5861dd3e-6dae-44d2-a52b-c56e9b12ca62" (UID: "5861dd3e-6dae-44d2-a52b-c56e9b12ca62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:05.996002 master-0 kubenswrapper[26474]: I0223 13:29:05.995972 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:06.030255 master-0 kubenswrapper[26474]: I0223 13:29:06.030174 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5861dd3e-6dae-44d2-a52b-c56e9b12ca62" (UID: "5861dd3e-6dae-44d2-a52b-c56e9b12ca62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:06.070386 master-0 kubenswrapper[26474]: I0223 13:29:06.067422 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data" (OuterVolumeSpecName: "config-data") pod "5861dd3e-6dae-44d2-a52b-c56e9b12ca62" (UID: "5861dd3e-6dae-44d2-a52b-c56e9b12ca62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:06.085805 master-0 kubenswrapper[26474]: I0223 13:29:06.085736 26474 generic.go:334] "Generic (PLEG): container finished" podID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerID="44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e" exitCode=0
Feb 23 13:29:06.085805 master-0 kubenswrapper[26474]: I0223 13:29:06.085794 26474 generic.go:334] "Generic (PLEG): container finished" podID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerID="a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61" exitCode=143
Feb 23 13:29:06.087078 master-0 kubenswrapper[26474]: I0223 13:29:06.087028 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" event={"ID":"5861dd3e-6dae-44d2-a52b-c56e9b12ca62","Type":"ContainerDied","Data":"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e"}
Feb 23 13:29:06.087158 master-0 kubenswrapper[26474]: I0223 13:29:06.087086 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" event={"ID":"5861dd3e-6dae-44d2-a52b-c56e9b12ca62","Type":"ContainerDied","Data":"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61"}
Feb 23 13:29:06.087158 master-0 kubenswrapper[26474]: I0223 13:29:06.087100 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" event={"ID":"5861dd3e-6dae-44d2-a52b-c56e9b12ca62","Type":"ContainerDied","Data":"e13a13cdaaee3a6d34f369590d75978e709ab7dd8ad2f65061344e17b88fec79"}
Feb 23 13:29:06.087158 master-0 kubenswrapper[26474]: I0223 13:29:06.087117 26474 scope.go:117] "RemoveContainer" containerID="44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e"
Feb 23 13:29:06.087395 master-0 kubenswrapper[26474]: I0223 13:29:06.087372 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.094632 master-0 kubenswrapper[26474]: I0223 13:29:06.092788 26474 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:06.094632 master-0 kubenswrapper[26474]: I0223 13:29:06.093500 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:06.094632 master-0 kubenswrapper[26474]: I0223 13:29:06.093568 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb6gx\" (UniqueName: \"kubernetes.io/projected/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-kube-api-access-kb6gx\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:06.094632 master-0 kubenswrapper[26474]: I0223 13:29:06.093588 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:06.094632 master-0 kubenswrapper[26474]: I0223 13:29:06.093634 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-logs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:06.094632 master-0 kubenswrapper[26474]: I0223 13:29:06.093651 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:06.094632 master-0 kubenswrapper[26474]: I0223 13:29:06.093665 26474 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5861dd3e-6dae-44d2-a52b-c56e9b12ca62-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:06.128090 master-0 kubenswrapper[26474]: I0223 13:29:06.128010 26474 scope.go:117] "RemoveContainer" containerID="a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61"
Feb 23 13:29:06.164128 master-0 kubenswrapper[26474]: I0223 13:29:06.164029 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-083a9-api-0"]
Feb 23 13:29:06.195534 master-0 kubenswrapper[26474]: I0223 13:29:06.194589 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-083a9-api-0"]
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.218473 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-api-0"]
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: E0223 13:29:06.219059 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c931e9e9-5999-442a-9b62-8b5b7dd8749b" containerName="init"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.219074 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c931e9e9-5999-442a-9b62-8b5b7dd8749b" containerName="init"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: E0223 13:29:06.219103 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerName="cinder-083a9-api-log"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.219110 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerName="cinder-083a9-api-log"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: E0223 13:29:06.219142 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerName="cinder-api"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.219150 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerName="cinder-api"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.219362 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c931e9e9-5999-442a-9b62-8b5b7dd8749b" containerName="init"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.219375 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerName="cinder-083a9-api-log"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.219422 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" containerName="cinder-api"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.220543 26474 scope.go:117] "RemoveContainer" containerID="44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.220645 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: E0223 13:29:06.221433 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e\": container with ID starting with 44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e not found: ID does not exist" containerID="44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.221469 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e"} err="failed to get container status \"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e\": rpc error: code = NotFound desc = could not find container \"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e\": container with ID starting with 44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e not found: ID does not exist"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.221497 26474 scope.go:117] "RemoveContainer" containerID="a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: E0223 13:29:06.224554 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61\": container with ID starting with a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61 not found: ID does not exist" containerID="a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.224691 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61"} err="failed to get container status \"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61\": rpc error: code = NotFound desc = could not find container \"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61\": container with ID starting with a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61 not found: ID does not exist"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.224719 26474 scope.go:117] "RemoveContainer" containerID="44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.225023 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e"} err="failed to get container status \"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e\": rpc error: code = NotFound desc = could not find container \"44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e\": container with ID starting with 44821914f630529fcdf82ff8c94234027edfac4421c4839b6786096c0dd4ad3e not found: ID does not exist"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.225046 26474 scope.go:117] "RemoveContainer" containerID="a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.225268 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.225370 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61"} err="failed to get container status \"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61\": rpc error: code = NotFound desc = could not find container \"a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61\": container with ID starting with a74d7120c4cb40804b8cde513b348c3143d5ab32d05db19930fecb90140edc61 not found: ID does not exist"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.225634 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.225847 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-api-config-data"
Feb 23 13:29:06.248891 master-0 kubenswrapper[26474]: I0223 13:29:06.238032 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-api-0"]
Feb 23 13:29:06.403517 master-0 kubenswrapper[26474]: I0223 13:29:06.403457 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f5997bb-bc87-4f09-803c-65532cef8cca-logs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.403517 master-0 kubenswrapper[26474]: I0223 13:29:06.403517 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-config-data\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.403749 master-0 kubenswrapper[26474]: I0223 13:29:06.403574 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-config-data-custom\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.403749 master-0 kubenswrapper[26474]: I0223 13:29:06.403602 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmfkl\" (UniqueName: \"kubernetes.io/projected/9f5997bb-bc87-4f09-803c-65532cef8cca-kube-api-access-gmfkl\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.403749 master-0 kubenswrapper[26474]: I0223 13:29:06.403622 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-scripts\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.403749 master-0 kubenswrapper[26474]: I0223 13:29:06.403737 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-combined-ca-bundle\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.403926 master-0 kubenswrapper[26474]: I0223 13:29:06.403772 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-internal-tls-certs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.403926 master-0 kubenswrapper[26474]: I0223 13:29:06.403809 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-public-tls-certs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.403926 master-0 kubenswrapper[26474]: I0223 13:29:06.403882 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f5997bb-bc87-4f09-803c-65532cef8cca-etc-machine-id\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.406553 master-0 kubenswrapper[26474]: I0223 13:29:06.406514 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5861dd3e-6dae-44d2-a52b-c56e9b12ca62" path="/var/lib/kubelet/pods/5861dd3e-6dae-44d2-a52b-c56e9b12ca62/volumes"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506157 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-internal-tls-certs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506248 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-public-tls-certs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506358 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f5997bb-bc87-4f09-803c-65532cef8cca-etc-machine-id\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506475 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f5997bb-bc87-4f09-803c-65532cef8cca-logs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506509 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-config-data\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506576 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-config-data-custom\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506602 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmfkl\" (UniqueName: \"kubernetes.io/projected/9f5997bb-bc87-4f09-803c-65532cef8cca-kube-api-access-gmfkl\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506623 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-scripts\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.508472 master-0 kubenswrapper[26474]: I0223 13:29:06.506692 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-combined-ca-bundle\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.514359 master-0 kubenswrapper[26474]: I0223 13:29:06.511375 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-internal-tls-certs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.518361 master-0 kubenswrapper[26474]: I0223 13:29:06.517842 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-config-data\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.522362 master-0 kubenswrapper[26474]: I0223 13:29:06.521134 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-public-tls-certs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.522362 master-0 kubenswrapper[26474]: I0223 13:29:06.521731 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f5997bb-bc87-4f09-803c-65532cef8cca-etc-machine-id\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.526360 master-0 kubenswrapper[26474]: I0223 13:29:06.522804 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f5997bb-bc87-4f09-803c-65532cef8cca-logs\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.526360 master-0 kubenswrapper[26474]: I0223 13:29:06.523630 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-config-data-custom\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.531469 master-0 kubenswrapper[26474]: I0223 13:29:06.530541 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-combined-ca-bundle\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.532366 master-0 kubenswrapper[26474]: I0223 13:29:06.532103 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f5997bb-bc87-4f09-803c-65532cef8cca-scripts\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.565367 master-0 kubenswrapper[26474]: I0223 13:29:06.559015 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmfkl\" (UniqueName: \"kubernetes.io/projected/9f5997bb-bc87-4f09-803c-65532cef8cca-kube-api-access-gmfkl\") pod \"cinder-083a9-api-0\" (UID: \"9f5997bb-bc87-4f09-803c-65532cef8cca\") " pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:06.592311 master-0 kubenswrapper[26474]: I0223 13:29:06.592238 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:07.103429 master-0 kubenswrapper[26474]: I0223 13:29:07.103359 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-api-0"]
Feb 23 13:29:07.329366 master-0 kubenswrapper[26474]: I0223 13:29:07.327832 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-655989fbf7-gkzkz"]
Feb 23 13:29:07.330135 master-0 kubenswrapper[26474]: I0223 13:29:07.329789 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.332760 master-0 kubenswrapper[26474]: I0223 13:29:07.332714 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 23 13:29:07.332972 master-0 kubenswrapper[26474]: I0223 13:29:07.332943 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 23 13:29:07.391615 master-0 kubenswrapper[26474]: I0223 13:29:07.391477 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-655989fbf7-gkzkz"]
Feb 23 13:29:07.430613 master-0 kubenswrapper[26474]: I0223 13:29:07.430473 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5s76\" (UniqueName: \"kubernetes.io/projected/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-kube-api-access-m5s76\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.430613 master-0 kubenswrapper[26474]: I0223 13:29:07.430579 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-combined-ca-bundle\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.430982 master-0 kubenswrapper[26474]: I0223 13:29:07.430666 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-internal-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.430982 master-0 kubenswrapper[26474]: I0223 13:29:07.430709 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-ovndb-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.430982 master-0 kubenswrapper[26474]: I0223 13:29:07.430741 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-public-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.430982 master-0 kubenswrapper[26474]: I0223 13:29:07.430845 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-config\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.430982 master-0 kubenswrapper[26474]: I0223 13:29:07.430940 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-httpd-config\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.538577 master-0 kubenswrapper[26474]: I0223 13:29:07.534630 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-httpd-config\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.538577 master-0 kubenswrapper[26474]: I0223 13:29:07.534822 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5s76\" (UniqueName: \"kubernetes.io/projected/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-kube-api-access-m5s76\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.538577 master-0 kubenswrapper[26474]: I0223 13:29:07.534923 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-combined-ca-bundle\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.538577 master-0 kubenswrapper[26474]: I0223 13:29:07.535234 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-internal-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.538577 master-0 kubenswrapper[26474]: I0223 13:29:07.535320 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-ovndb-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.538577 master-0 kubenswrapper[26474]: I0223 13:29:07.535365 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-public-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.538577 master-0 kubenswrapper[26474]: I0223 13:29:07.536548 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-config\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.543359 master-0 kubenswrapper[26474]: I0223 13:29:07.542177 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-ovndb-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.543359 master-0 kubenswrapper[26474]: I0223 13:29:07.542616 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-combined-ca-bundle\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.543359 master-0 kubenswrapper[26474]: I0223 13:29:07.543147 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-httpd-config\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:07.543597
master-0 kubenswrapper[26474]: I0223 13:29:07.543448 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-public-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz" Feb 23 13:29:07.549941 master-0 kubenswrapper[26474]: I0223 13:29:07.546150 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-internal-tls-certs\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz" Feb 23 13:29:07.556759 master-0 kubenswrapper[26474]: I0223 13:29:07.556689 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-config\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz" Feb 23 13:29:07.559384 master-0 kubenswrapper[26474]: I0223 13:29:07.559327 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5s76\" (UniqueName: \"kubernetes.io/projected/26e2695d-d72c-443d-94c6-efb4b4a6d6fc-kube-api-access-m5s76\") pod \"neutron-655989fbf7-gkzkz\" (UID: \"26e2695d-d72c-443d-94c6-efb4b4a6d6fc\") " pod="openstack/neutron-655989fbf7-gkzkz" Feb 23 13:29:07.697891 master-0 kubenswrapper[26474]: I0223 13:29:07.697742 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-655989fbf7-gkzkz" Feb 23 13:29:08.127131 master-0 kubenswrapper[26474]: I0223 13:29:08.126306 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" event={"ID":"9f5997bb-bc87-4f09-803c-65532cef8cca","Type":"ContainerStarted","Data":"0f59780c2803610b8a46d8edbfcedeb6a54530d70edf44d0062c1da621f5b5df"} Feb 23 13:29:08.127131 master-0 kubenswrapper[26474]: I0223 13:29:08.126390 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" event={"ID":"9f5997bb-bc87-4f09-803c-65532cef8cca","Type":"ContainerStarted","Data":"dd7376453d9028d6a8f856d81df93a14aa804476243af8a23e9f0741494025a2"} Feb 23 13:29:08.365501 master-0 kubenswrapper[26474]: I0223 13:29:08.365424 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-655989fbf7-gkzkz"] Feb 23 13:29:09.142133 master-0 kubenswrapper[26474]: I0223 13:29:09.141977 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655989fbf7-gkzkz" event={"ID":"26e2695d-d72c-443d-94c6-efb4b4a6d6fc","Type":"ContainerStarted","Data":"64df881500cfc2da8bef1f8ba071871c077f7cb6b88b8f3da04c3d25fa5f19e6"} Feb 23 13:29:09.142133 master-0 kubenswrapper[26474]: I0223 13:29:09.142057 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655989fbf7-gkzkz" event={"ID":"26e2695d-d72c-443d-94c6-efb4b4a6d6fc","Type":"ContainerStarted","Data":"c12b384724f5734f861f7423d8b35ff3152e7e12bf038050401ded475c4630ff"} Feb 23 13:29:10.157980 master-0 kubenswrapper[26474]: I0223 13:29:10.157901 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655989fbf7-gkzkz" event={"ID":"26e2695d-d72c-443d-94c6-efb4b4a6d6fc","Type":"ContainerStarted","Data":"73d1c64af2940b127f338b0dcec49e8f77601b7f8a131aa84ee1091fce907325"} Feb 23 13:29:10.158605 master-0 kubenswrapper[26474]: I0223 13:29:10.158063 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-655989fbf7-gkzkz" Feb 23 13:29:10.160587 master-0 kubenswrapper[26474]: I0223 13:29:10.160479 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-api-0" event={"ID":"9f5997bb-bc87-4f09-803c-65532cef8cca","Type":"ContainerStarted","Data":"4367af7c01f4ef9636287675148cf399f08d76d8b1d92ad5c48a1363a3724901"} Feb 23 13:29:10.160701 master-0 kubenswrapper[26474]: I0223 13:29:10.160661 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-083a9-api-0" Feb 23 13:29:10.341751 master-0 kubenswrapper[26474]: I0223 13:29:10.341598 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-655989fbf7-gkzkz" podStartSLOduration=3.34156411 podStartE2EDuration="3.34156411s" podCreationTimestamp="2026-02-23 13:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:10.333649577 +0000 UTC m=+872.180157274" watchObservedRunningTime="2026-02-23 13:29:10.34156411 +0000 UTC m=+872.188071827" Feb 23 13:29:10.483489 master-0 kubenswrapper[26474]: I0223 13:29:10.483181 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-api-0" podStartSLOduration=4.483164292 podStartE2EDuration="4.483164292s" podCreationTimestamp="2026-02-23 13:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:10.463762669 +0000 UTC m=+872.310270366" watchObservedRunningTime="2026-02-23 13:29:10.483164292 +0000 UTC m=+872.329671969" Feb 23 13:29:10.837877 master-0 kubenswrapper[26474]: I0223 13:29:10.837809 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:11.110747 master-0 kubenswrapper[26474]: I0223 13:29:11.110425 26474 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-083a9-scheduler-0"] Feb 23 13:29:11.174162 master-0 kubenswrapper[26474]: I0223 13:29:11.174084 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-083a9-scheduler-0" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerName="cinder-scheduler" containerID="cri-o://60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631" gracePeriod=30 Feb 23 13:29:11.174810 master-0 kubenswrapper[26474]: I0223 13:29:11.174156 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-083a9-scheduler-0" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerName="probe" containerID="cri-o://fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3" gracePeriod=30 Feb 23 13:29:11.213309 master-0 kubenswrapper[26474]: I0223 13:29:11.213239 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:11.214388 master-0 kubenswrapper[26474]: I0223 13:29:11.214362 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:11.541154 master-0 kubenswrapper[26474]: I0223 13:29:11.540961 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"] Feb 23 13:29:11.697647 master-0 kubenswrapper[26474]: I0223 13:29:11.697525 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:11.861772 master-0 kubenswrapper[26474]: I0223 13:29:11.861709 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-083a9-backup-0"] Feb 23 13:29:11.970172 master-0 kubenswrapper[26474]: I0223 13:29:11.970067 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7544b46fd7-pt6kw"] Feb 23 13:29:11.971619 master-0 kubenswrapper[26474]: I0223 13:29:11.970477 26474 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" podUID="8609ce54-234c-4673-a9d6-14855102d116" containerName="dnsmasq-dns" containerID="cri-o://a1c0df0bb0508bdcba6fbeb32b39cd79503267c8352228073e3cf7ff94c6155a" gracePeriod=10 Feb 23 13:29:12.230013 master-0 kubenswrapper[26474]: I0223 13:29:12.229901 26474 generic.go:334] "Generic (PLEG): container finished" podID="8609ce54-234c-4673-a9d6-14855102d116" containerID="a1c0df0bb0508bdcba6fbeb32b39cd79503267c8352228073e3cf7ff94c6155a" exitCode=0 Feb 23 13:29:12.230013 master-0 kubenswrapper[26474]: I0223 13:29:12.229967 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" event={"ID":"8609ce54-234c-4673-a9d6-14855102d116","Type":"ContainerDied","Data":"a1c0df0bb0508bdcba6fbeb32b39cd79503267c8352228073e3cf7ff94c6155a"} Feb 23 13:29:12.233161 master-0 kubenswrapper[26474]: I0223 13:29:12.232222 26474 generic.go:334] "Generic (PLEG): container finished" podID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerID="fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3" exitCode=0 Feb 23 13:29:12.233161 master-0 kubenswrapper[26474]: I0223 13:29:12.232456 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" podUID="a472c463-7365-4002-9785-ff5f086873f7" containerName="cinder-volume" containerID="cri-o://c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51" gracePeriod=30 Feb 23 13:29:12.233161 master-0 kubenswrapper[26474]: I0223 13:29:12.232760 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"7cafd4cd-0870-40fb-97a2-30de667cd263","Type":"ContainerDied","Data":"fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3"} Feb 23 13:29:12.233161 master-0 kubenswrapper[26474]: I0223 13:29:12.232928 26474 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/cinder-083a9-backup-0" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerName="cinder-backup" containerID="cri-o://7d164d3b19e62e442178275016798866803f4043cd6a260fa05f4cfd3c67da52" gracePeriod=30 Feb 23 13:29:12.233964 master-0 kubenswrapper[26474]: I0223 13:29:12.233579 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" podUID="a472c463-7365-4002-9785-ff5f086873f7" containerName="probe" containerID="cri-o://bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f" gracePeriod=30 Feb 23 13:29:12.233964 master-0 kubenswrapper[26474]: I0223 13:29:12.233695 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-083a9-backup-0" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerName="probe" containerID="cri-o://e32bda4630a090753107d70fbf4eef489e4d85c2a23a7d4934cf0b47e4f60ee8" gracePeriod=30 Feb 23 13:29:12.801547 master-0 kubenswrapper[26474]: I0223 13:29:12.801479 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" Feb 23 13:29:12.841732 master-0 kubenswrapper[26474]: I0223 13:29:12.841215 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-svc\") pod \"8609ce54-234c-4673-a9d6-14855102d116\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " Feb 23 13:29:12.841971 master-0 kubenswrapper[26474]: I0223 13:29:12.841927 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5h8q\" (UniqueName: \"kubernetes.io/projected/8609ce54-234c-4673-a9d6-14855102d116-kube-api-access-p5h8q\") pod \"8609ce54-234c-4673-a9d6-14855102d116\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " Feb 23 13:29:12.842393 master-0 kubenswrapper[26474]: I0223 13:29:12.842371 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-config\") pod \"8609ce54-234c-4673-a9d6-14855102d116\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " Feb 23 13:29:12.842458 master-0 kubenswrapper[26474]: I0223 13:29:12.842402 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-sb\") pod \"8609ce54-234c-4673-a9d6-14855102d116\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " Feb 23 13:29:12.842513 master-0 kubenswrapper[26474]: I0223 13:29:12.842478 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-swift-storage-0\") pod \"8609ce54-234c-4673-a9d6-14855102d116\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " Feb 23 13:29:12.842560 master-0 kubenswrapper[26474]: I0223 13:29:12.842529 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-nb\") pod \"8609ce54-234c-4673-a9d6-14855102d116\" (UID: \"8609ce54-234c-4673-a9d6-14855102d116\") " Feb 23 13:29:12.846404 master-0 kubenswrapper[26474]: I0223 13:29:12.845810 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8609ce54-234c-4673-a9d6-14855102d116-kube-api-access-p5h8q" (OuterVolumeSpecName: "kube-api-access-p5h8q") pod "8609ce54-234c-4673-a9d6-14855102d116" (UID: "8609ce54-234c-4673-a9d6-14855102d116"). InnerVolumeSpecName "kube-api-access-p5h8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:12.948441 master-0 kubenswrapper[26474]: I0223 13:29:12.933317 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8609ce54-234c-4673-a9d6-14855102d116" (UID: "8609ce54-234c-4673-a9d6-14855102d116"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:12.948441 master-0 kubenswrapper[26474]: I0223 13:29:12.947607 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5h8q\" (UniqueName: \"kubernetes.io/projected/8609ce54-234c-4673-a9d6-14855102d116-kube-api-access-p5h8q\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:12.948441 master-0 kubenswrapper[26474]: I0223 13:29:12.947643 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:12.964484 master-0 kubenswrapper[26474]: I0223 13:29:12.959889 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8609ce54-234c-4673-a9d6-14855102d116" (UID: "8609ce54-234c-4673-a9d6-14855102d116"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:13.025928 master-0 kubenswrapper[26474]: I0223 13:29:13.025863 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8609ce54-234c-4673-a9d6-14855102d116" (UID: "8609ce54-234c-4673-a9d6-14855102d116"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:13.035809 master-0 kubenswrapper[26474]: I0223 13:29:13.035690 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-config" (OuterVolumeSpecName: "config") pod "8609ce54-234c-4673-a9d6-14855102d116" (UID: "8609ce54-234c-4673-a9d6-14855102d116"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:13.045257 master-0 kubenswrapper[26474]: I0223 13:29:13.045187 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8609ce54-234c-4673-a9d6-14855102d116" (UID: "8609ce54-234c-4673-a9d6-14855102d116"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:13.049731 master-0 kubenswrapper[26474]: I0223 13:29:13.049659 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:13.049847 master-0 kubenswrapper[26474]: I0223 13:29:13.049734 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:13.049847 master-0 kubenswrapper[26474]: I0223 13:29:13.049752 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:13.049847 master-0 kubenswrapper[26474]: I0223 13:29:13.049768 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8609ce54-234c-4673-a9d6-14855102d116-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:13.219828 master-0 kubenswrapper[26474]: I0223 13:29:13.219706 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.253767 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-sys\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.253919 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-lib-modules\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.253916 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-sys" (OuterVolumeSpecName: "sys") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.253952 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-nvme\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.253986 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254006 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254025 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254037 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-iscsi\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254124 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-lib-cinder\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254161 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-combined-ca-bundle\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 
kubenswrapper[26474]: I0223 13:29:13.254180 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-machine-id\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254220 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-run\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254243 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-cinder\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254262 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-scripts\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254261 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254325 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254330 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knlsn\" (UniqueName: \"kubernetes.io/projected/a472c463-7365-4002-9785-ff5f086873f7-kube-api-access-knlsn\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") " Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254361 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254388 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254435 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-run" (OuterVolumeSpecName: "run") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254654 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-brick\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") "
Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254705 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data-custom\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") "
Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254752 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-dev\") pod \"a472c463-7365-4002-9785-ff5f086873f7\" (UID: \"a472c463-7365-4002-9785-ff5f086873f7\") "
Feb 23 13:29:13.254893 master-0 kubenswrapper[26474]: I0223 13:29:13.254782 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.255047 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-dev" (OuterVolumeSpecName: "dev") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.255933 26474 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.255960 26474 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.255979 26474 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-run\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.255993 26474 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.256008 26474 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.256019 26474 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-dev\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.256030 26474 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-sys\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.256041 26474 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-lib-modules\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.256053 26474 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-nvme\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.256274 master-0 kubenswrapper[26474]: I0223 13:29:13.256064 26474 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a472c463-7365-4002-9785-ff5f086873f7-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.269561 master-0 kubenswrapper[26474]: I0223 13:29:13.268732 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:13.269561 master-0 kubenswrapper[26474]: I0223 13:29:13.268782 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-scripts" (OuterVolumeSpecName: "scripts") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:13.274031 master-0 kubenswrapper[26474]: I0223 13:29:13.270958 26474 generic.go:334] "Generic (PLEG): container finished" podID="a472c463-7365-4002-9785-ff5f086873f7" containerID="bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f" exitCode=0
Feb 23 13:29:13.274031 master-0 kubenswrapper[26474]: I0223 13:29:13.271008 26474 generic.go:334] "Generic (PLEG): container finished" podID="a472c463-7365-4002-9785-ff5f086873f7" containerID="c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51" exitCode=0
Feb 23 13:29:13.274031 master-0 kubenswrapper[26474]: I0223 13:29:13.271061 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" event={"ID":"a472c463-7365-4002-9785-ff5f086873f7","Type":"ContainerDied","Data":"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f"}
Feb 23 13:29:13.274031 master-0 kubenswrapper[26474]: I0223 13:29:13.271096 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" event={"ID":"a472c463-7365-4002-9785-ff5f086873f7","Type":"ContainerDied","Data":"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51"}
Feb 23 13:29:13.274031 master-0 kubenswrapper[26474]: I0223 13:29:13.271110 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" event={"ID":"a472c463-7365-4002-9785-ff5f086873f7","Type":"ContainerDied","Data":"69ec1315299ee08186a8d676b3bcc5b8362a6e381497b4747ce568fb25b5dc3d"}
Feb 23 13:29:13.274031 master-0 kubenswrapper[26474]: I0223 13:29:13.271130 26474 scope.go:117] "RemoveContainer" containerID="bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f"
Feb 23 13:29:13.274031 master-0 kubenswrapper[26474]: I0223 13:29:13.271296 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.277189 master-0 kubenswrapper[26474]: I0223 13:29:13.275222 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw" event={"ID":"8609ce54-234c-4673-a9d6-14855102d116","Type":"ContainerDied","Data":"4b842e019ab10f1ab12c939f8406a79ff134d9102af4ff3ab3135701709405d6"}
Feb 23 13:29:13.277189 master-0 kubenswrapper[26474]: I0223 13:29:13.275296 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7544b46fd7-pt6kw"
Feb 23 13:29:13.299364 master-0 kubenswrapper[26474]: I0223 13:29:13.296797 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a472c463-7365-4002-9785-ff5f086873f7-kube-api-access-knlsn" (OuterVolumeSpecName: "kube-api-access-knlsn") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "kube-api-access-knlsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:13.361608 master-0 kubenswrapper[26474]: I0223 13:29:13.355499 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7544b46fd7-pt6kw"]
Feb 23 13:29:13.361608 master-0 kubenswrapper[26474]: I0223 13:29:13.358188 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.361608 master-0 kubenswrapper[26474]: I0223 13:29:13.358242 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knlsn\" (UniqueName: \"kubernetes.io/projected/a472c463-7365-4002-9785-ff5f086873f7-kube-api-access-knlsn\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.361608 master-0 kubenswrapper[26474]: I0223 13:29:13.358257 26474 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.366886 master-0 kubenswrapper[26474]: I0223 13:29:13.365228 26474 scope.go:117] "RemoveContainer" containerID="c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51"
Feb 23 13:29:13.376077 master-0 kubenswrapper[26474]: I0223 13:29:13.373996 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7544b46fd7-pt6kw"]
Feb 23 13:29:13.405622 master-0 kubenswrapper[26474]: I0223 13:29:13.405540 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:13.411757 master-0 kubenswrapper[26474]: I0223 13:29:13.411710 26474 scope.go:117] "RemoveContainer" containerID="bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f"
Feb 23 13:29:13.412633 master-0 kubenswrapper[26474]: E0223 13:29:13.412571 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f\": container with ID starting with bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f not found: ID does not exist" containerID="bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f"
Feb 23 13:29:13.412719 master-0 kubenswrapper[26474]: I0223 13:29:13.412633 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f"} err="failed to get container status \"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f\": rpc error: code = NotFound desc = could not find container \"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f\": container with ID starting with bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f not found: ID does not exist"
Feb 23 13:29:13.412719 master-0 kubenswrapper[26474]: I0223 13:29:13.412666 26474 scope.go:117] "RemoveContainer" containerID="c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51"
Feb 23 13:29:13.413181 master-0 kubenswrapper[26474]: E0223 13:29:13.413010 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51\": container with ID starting with c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51 not found: ID does not exist" containerID="c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51"
Feb 23 13:29:13.413181 master-0 kubenswrapper[26474]: I0223 13:29:13.413054 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51"} err="failed to get container status \"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51\": rpc error: code = NotFound desc = could not find container \"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51\": container with ID starting with c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51 not found: ID does not exist"
Feb 23 13:29:13.413181 master-0 kubenswrapper[26474]: I0223 13:29:13.413084 26474 scope.go:117] "RemoveContainer" containerID="bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f"
Feb 23 13:29:13.413357 master-0 kubenswrapper[26474]: I0223 13:29:13.413317 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f"} err="failed to get container status \"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f\": rpc error: code = NotFound desc = could not find container \"bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f\": container with ID starting with bb4fc4da395949bd18d4d565393bb31ddefe009e9d3097d07052e2b7e1ae7b1f not found: ID does not exist"
Feb 23 13:29:13.413357 master-0 kubenswrapper[26474]: I0223 13:29:13.413355 26474 scope.go:117] "RemoveContainer" containerID="c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51"
Feb 23 13:29:13.413548 master-0 kubenswrapper[26474]: I0223 13:29:13.413518 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51"} err="failed to get container status \"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51\": rpc error: code = NotFound desc = could not find container \"c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51\": container with ID starting with c6d7c4a80de965c9550641ad70b979e21d062cc16eed9eb206c0bafc8531eb51 not found: ID does not exist"
Feb 23 13:29:13.413548 master-0 kubenswrapper[26474]: I0223 13:29:13.413539 26474 scope.go:117] "RemoveContainer" containerID="a1c0df0bb0508bdcba6fbeb32b39cd79503267c8352228073e3cf7ff94c6155a"
Feb 23 13:29:13.442910 master-0 kubenswrapper[26474]: I0223 13:29:13.442786 26474 scope.go:117] "RemoveContainer" containerID="20e1e8edcd43ae2b632d5fec61a5cbff24fa01fe2d9b45a36f77345cac29460f"
Feb 23 13:29:13.454512 master-0 kubenswrapper[26474]: I0223 13:29:13.454453 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data" (OuterVolumeSpecName: "config-data") pod "a472c463-7365-4002-9785-ff5f086873f7" (UID: "a472c463-7365-4002-9785-ff5f086873f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:13.460724 master-0 kubenswrapper[26474]: I0223 13:29:13.460590 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.460724 master-0 kubenswrapper[26474]: I0223 13:29:13.460611 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a472c463-7365-4002-9785-ff5f086873f7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:13.674381 master-0 kubenswrapper[26474]: I0223 13:29:13.673469 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"]
Feb 23 13:29:13.687449 master-0 kubenswrapper[26474]: I0223 13:29:13.687366 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"]
Feb 23 13:29:13.710131 master-0 kubenswrapper[26474]: I0223 13:29:13.710028 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"]
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: E0223 13:29:13.710721 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a472c463-7365-4002-9785-ff5f086873f7" containerName="cinder-volume"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.710752 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="a472c463-7365-4002-9785-ff5f086873f7" containerName="cinder-volume"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: E0223 13:29:13.710778 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a472c463-7365-4002-9785-ff5f086873f7" containerName="probe"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.710788 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="a472c463-7365-4002-9785-ff5f086873f7" containerName="probe"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: E0223 13:29:13.710806 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8609ce54-234c-4673-a9d6-14855102d116" containerName="init"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.710814 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="8609ce54-234c-4673-a9d6-14855102d116" containerName="init"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: E0223 13:29:13.711063 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8609ce54-234c-4673-a9d6-14855102d116" containerName="dnsmasq-dns"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.711074 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="8609ce54-234c-4673-a9d6-14855102d116" containerName="dnsmasq-dns"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.713105 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="8609ce54-234c-4673-a9d6-14855102d116" containerName="dnsmasq-dns"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.713158 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="a472c463-7365-4002-9785-ff5f086873f7" containerName="probe"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.713215 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="a472c463-7365-4002-9785-ff5f086873f7" containerName="cinder-volume"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.715416 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.728491 master-0 kubenswrapper[26474]: I0223 13:29:13.718204 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-volume-lvm-iscsi-config-data"
Feb 23 13:29:13.743450 master-0 kubenswrapper[26474]: I0223 13:29:13.731619 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"]
Feb 23 13:29:13.768361 master-0 kubenswrapper[26474]: I0223 13:29:13.768315 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-scheduler-0"
Feb 23 13:29:13.775890 master-0 kubenswrapper[26474]: I0223 13:29:13.775826 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-locks-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.775932 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-combined-ca-bundle\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.775987 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-dev\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.776008 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-nvme\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.776052 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-config-data\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.776077 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-sys\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.776127 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-scripts\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.776201 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-lib-modules\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.776404 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-locks-brick\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.776612 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-machine-id\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.776840 master-0 kubenswrapper[26474]: I0223 13:29:13.776681 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-run\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.777474 master-0 kubenswrapper[26474]: I0223 13:29:13.776874 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-iscsi\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.777474 master-0 kubenswrapper[26474]: I0223 13:29:13.776934 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-config-data-custom\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.777474 master-0 kubenswrapper[26474]: I0223 13:29:13.776973 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-lib-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.777474 master-0 kubenswrapper[26474]: I0223 13:29:13.777133 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6kn\" (UniqueName: \"kubernetes.io/projected/206b569e-fd7d-4b95-9655-aeffa25d2dda-kube-api-access-fv6kn\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.878417 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cafd4cd-0870-40fb-97a2-30de667cd263-etc-machine-id\") pod \"7cafd4cd-0870-40fb-97a2-30de667cd263\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") "
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.878527 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-combined-ca-bundle\") pod \"7cafd4cd-0870-40fb-97a2-30de667cd263\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") "
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.878637 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6lrv\" (UniqueName: \"kubernetes.io/projected/7cafd4cd-0870-40fb-97a2-30de667cd263-kube-api-access-q6lrv\") pod \"7cafd4cd-0870-40fb-97a2-30de667cd263\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") "
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.878671 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data-custom\") pod \"7cafd4cd-0870-40fb-97a2-30de667cd263\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") "
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.878701 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-scripts\") pod \"7cafd4cd-0870-40fb-97a2-30de667cd263\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") "
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.878886 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data\") pod \"7cafd4cd-0870-40fb-97a2-30de667cd263\" (UID: \"7cafd4cd-0870-40fb-97a2-30de667cd263\") "
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879171 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-locks-brick\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879224 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-machine-id\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879249 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-run\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879285 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-iscsi\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879302 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-config-data-custom\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879323 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-lib-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879411 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6kn\" (UniqueName: \"kubernetes.io/projected/206b569e-fd7d-4b95-9655-aeffa25d2dda-kube-api-access-fv6kn\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879461 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-locks-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879502 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-combined-ca-bundle\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879526 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-dev\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879541 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-nvme\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879568 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-config-data\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879590 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-sys\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879621 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-scripts\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879640 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-lib-modules\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879731 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-lib-modules\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.881374 master-0 kubenswrapper[26474]: I0223 13:29:13.879802 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cafd4cd-0870-40fb-97a2-30de667cd263-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7cafd4cd-0870-40fb-97a2-30de667cd263" (UID: "7cafd4cd-0870-40fb-97a2-30de667cd263"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:13.883211 master-0 kubenswrapper[26474]: I0223 13:29:13.883158 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-machine-id\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.883211 master-0 kubenswrapper[26474]: I0223 13:29:13.883199 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-dev\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.883375 master-0 kubenswrapper[26474]: I0223 13:29:13.883200 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-locks-brick\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.883375 master-0 kubenswrapper[26474]: I0223 13:29:13.883248 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-run\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.883375 master-0 kubenswrapper[26474]: I0223 13:29:13.883279 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-lib-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.884154 master-0 kubenswrapper[26474]: I0223 13:29:13.884026 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-iscsi\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.884595 master-0 kubenswrapper[26474]: I0223 13:29:13.884551 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-sys\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.884690 master-0 kubenswrapper[26474]: I0223 13:29:13.884573 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-etc-nvme\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.885370 master-0 kubenswrapper[26474]: I0223 13:29:13.885325 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/206b569e-fd7d-4b95-9655-aeffa25d2dda-var-locks-cinder\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.886404 master-0 kubenswrapper[26474]: I0223 13:29:13.886241 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-scripts" (OuterVolumeSpecName: "scripts") pod "7cafd4cd-0870-40fb-97a2-30de667cd263" (UID: "7cafd4cd-0870-40fb-97a2-30de667cd263"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:13.886695 master-0 kubenswrapper[26474]: I0223 13:29:13.886669 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-config-data-custom\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.887989 master-0 kubenswrapper[26474]: I0223 13:29:13.887924 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-scripts\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.888641 master-0 kubenswrapper[26474]: I0223 13:29:13.888609 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-config-data\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0"
Feb 23 13:29:13.897371 master-0 kubenswrapper[26474]: I0223 13:29:13.897286 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7cafd4cd-0870-40fb-97a2-30de667cd263" (UID: "7cafd4cd-0870-40fb-97a2-30de667cd263"). InnerVolumeSpecName "config-data-custom".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:13.899526 master-0 kubenswrapper[26474]: I0223 13:29:13.898974 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cafd4cd-0870-40fb-97a2-30de667cd263-kube-api-access-q6lrv" (OuterVolumeSpecName: "kube-api-access-q6lrv") pod "7cafd4cd-0870-40fb-97a2-30de667cd263" (UID: "7cafd4cd-0870-40fb-97a2-30de667cd263"). InnerVolumeSpecName "kube-api-access-q6lrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:13.899648 master-0 kubenswrapper[26474]: I0223 13:29:13.899519 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206b569e-fd7d-4b95-9655-aeffa25d2dda-combined-ca-bundle\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:13.902561 master-0 kubenswrapper[26474]: I0223 13:29:13.902522 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6kn\" (UniqueName: \"kubernetes.io/projected/206b569e-fd7d-4b95-9655-aeffa25d2dda-kube-api-access-fv6kn\") pod \"cinder-083a9-volume-lvm-iscsi-0\" (UID: \"206b569e-fd7d-4b95-9655-aeffa25d2dda\") " pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:13.981627 master-0 kubenswrapper[26474]: I0223 13:29:13.981561 26474 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cafd4cd-0870-40fb-97a2-30de667cd263-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:13.981627 master-0 kubenswrapper[26474]: I0223 13:29:13.981610 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6lrv\" (UniqueName: \"kubernetes.io/projected/7cafd4cd-0870-40fb-97a2-30de667cd263-kube-api-access-q6lrv\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:13.981627 master-0 kubenswrapper[26474]: I0223 
13:29:13.981623 26474 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:13.981627 master-0 kubenswrapper[26474]: I0223 13:29:13.981632 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:13.987803 master-0 kubenswrapper[26474]: I0223 13:29:13.987726 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cafd4cd-0870-40fb-97a2-30de667cd263" (UID: "7cafd4cd-0870-40fb-97a2-30de667cd263"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:14.038256 master-0 kubenswrapper[26474]: I0223 13:29:14.038176 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data" (OuterVolumeSpecName: "config-data") pod "7cafd4cd-0870-40fb-97a2-30de667cd263" (UID: "7cafd4cd-0870-40fb-97a2-30de667cd263"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:14.065231 master-0 kubenswrapper[26474]: I0223 13:29:14.064381 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:14.084995 master-0 kubenswrapper[26474]: I0223 13:29:14.084949 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:14.085247 master-0 kubenswrapper[26474]: I0223 13:29:14.085229 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cafd4cd-0870-40fb-97a2-30de667cd263-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:14.299259 master-0 kubenswrapper[26474]: I0223 13:29:14.299189 26474 generic.go:334] "Generic (PLEG): container finished" podID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerID="e32bda4630a090753107d70fbf4eef489e4d85c2a23a7d4934cf0b47e4f60ee8" exitCode=0 Feb 23 13:29:14.299259 master-0 kubenswrapper[26474]: I0223 13:29:14.299239 26474 generic.go:334] "Generic (PLEG): container finished" podID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerID="7d164d3b19e62e442178275016798866803f4043cd6a260fa05f4cfd3c67da52" exitCode=0 Feb 23 13:29:14.299787 master-0 kubenswrapper[26474]: I0223 13:29:14.299303 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8","Type":"ContainerDied","Data":"e32bda4630a090753107d70fbf4eef489e4d85c2a23a7d4934cf0b47e4f60ee8"} Feb 23 13:29:14.299787 master-0 kubenswrapper[26474]: I0223 13:29:14.299357 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8","Type":"ContainerDied","Data":"7d164d3b19e62e442178275016798866803f4043cd6a260fa05f4cfd3c67da52"} Feb 23 13:29:14.322513 master-0 kubenswrapper[26474]: I0223 13:29:14.322430 26474 generic.go:334] "Generic (PLEG): container finished" 
podID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerID="60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631" exitCode=0 Feb 23 13:29:14.322855 master-0 kubenswrapper[26474]: I0223 13:29:14.322611 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.323082 master-0 kubenswrapper[26474]: I0223 13:29:14.322918 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"7cafd4cd-0870-40fb-97a2-30de667cd263","Type":"ContainerDied","Data":"60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631"} Feb 23 13:29:14.323082 master-0 kubenswrapper[26474]: I0223 13:29:14.322954 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"7cafd4cd-0870-40fb-97a2-30de667cd263","Type":"ContainerDied","Data":"952bf7a79a69a43f2ccf3116db9761c573161476f097b078d9b54c548001961d"} Feb 23 13:29:14.323082 master-0 kubenswrapper[26474]: I0223 13:29:14.322983 26474 scope.go:117] "RemoveContainer" containerID="fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3" Feb 23 13:29:14.386744 master-0 kubenswrapper[26474]: I0223 13:29:14.386639 26474 scope.go:117] "RemoveContainer" containerID="60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631" Feb 23 13:29:14.455314 master-0 kubenswrapper[26474]: I0223 13:29:14.454750 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8609ce54-234c-4673-a9d6-14855102d116" path="/var/lib/kubelet/pods/8609ce54-234c-4673-a9d6-14855102d116/volumes" Feb 23 13:29:14.456399 master-0 kubenswrapper[26474]: I0223 13:29:14.455882 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a472c463-7365-4002-9785-ff5f086873f7" path="/var/lib/kubelet/pods/a472c463-7365-4002-9785-ff5f086873f7/volumes" Feb 23 13:29:14.457238 master-0 kubenswrapper[26474]: I0223 13:29:14.456840 26474 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-083a9-scheduler-0"] Feb 23 13:29:14.457238 master-0 kubenswrapper[26474]: I0223 13:29:14.456883 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-083a9-scheduler-0"] Feb 23 13:29:14.466740 master-0 kubenswrapper[26474]: I0223 13:29:14.466661 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-scheduler-0"] Feb 23 13:29:14.467891 master-0 kubenswrapper[26474]: E0223 13:29:14.467857 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerName="probe" Feb 23 13:29:14.467891 master-0 kubenswrapper[26474]: I0223 13:29:14.467885 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerName="probe" Feb 23 13:29:14.469362 master-0 kubenswrapper[26474]: E0223 13:29:14.467906 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerName="cinder-scheduler" Feb 23 13:29:14.469362 master-0 kubenswrapper[26474]: I0223 13:29:14.467914 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerName="cinder-scheduler" Feb 23 13:29:14.469362 master-0 kubenswrapper[26474]: I0223 13:29:14.468174 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerName="cinder-scheduler" Feb 23 13:29:14.469362 master-0 kubenswrapper[26474]: I0223 13:29:14.468203 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" containerName="probe" Feb 23 13:29:14.469506 master-0 kubenswrapper[26474]: I0223 13:29:14.469488 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.475926 master-0 kubenswrapper[26474]: I0223 13:29:14.474250 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-scheduler-config-data" Feb 23 13:29:14.490508 master-0 kubenswrapper[26474]: I0223 13:29:14.490451 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-scheduler-0"] Feb 23 13:29:14.512613 master-0 kubenswrapper[26474]: I0223 13:29:14.512574 26474 scope.go:117] "RemoveContainer" containerID="fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3" Feb 23 13:29:14.515479 master-0 kubenswrapper[26474]: E0223 13:29:14.514848 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3\": container with ID starting with fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3 not found: ID does not exist" containerID="fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3" Feb 23 13:29:14.515479 master-0 kubenswrapper[26474]: I0223 13:29:14.514901 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3"} err="failed to get container status \"fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3\": rpc error: code = NotFound desc = could not find container \"fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3\": container with ID starting with fe89c16a4739fb4886b728169c4803881e50499c66b4c23de52d46e3432d46f3 not found: ID does not exist" Feb 23 13:29:14.515479 master-0 kubenswrapper[26474]: I0223 13:29:14.514929 26474 scope.go:117] "RemoveContainer" containerID="60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631" Feb 23 13:29:14.515479 master-0 kubenswrapper[26474]: E0223 13:29:14.515266 26474 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631\": container with ID starting with 60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631 not found: ID does not exist" containerID="60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631" Feb 23 13:29:14.515479 master-0 kubenswrapper[26474]: I0223 13:29:14.515287 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631"} err="failed to get container status \"60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631\": rpc error: code = NotFound desc = could not find container \"60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631\": container with ID starting with 60a70bde3b1e2d8eb78bf61b307d61e4478f60a9f8e677d52c719d4643167631 not found: ID does not exist" Feb 23 13:29:14.612913 master-0 kubenswrapper[26474]: I0223 13:29:14.612861 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-etc-machine-id\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.612996 master-0 kubenswrapper[26474]: I0223 13:29:14.612916 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-scripts\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.613097 master-0 kubenswrapper[26474]: I0223 13:29:14.613071 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-config-data-custom\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.613295 master-0 kubenswrapper[26474]: I0223 13:29:14.613261 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zf2f\" (UniqueName: \"kubernetes.io/projected/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-kube-api-access-4zf2f\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.613333 master-0 kubenswrapper[26474]: I0223 13:29:14.613304 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-config-data\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.613422 master-0 kubenswrapper[26474]: I0223 13:29:14.613400 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-combined-ca-bundle\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.614736 master-0 kubenswrapper[26474]: I0223 13:29:14.614689 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-volume-lvm-iscsi-0"] Feb 23 13:29:14.716421 master-0 kubenswrapper[26474]: I0223 13:29:14.716160 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-config-data-custom\") pod \"cinder-083a9-scheduler-0\" (UID: 
\"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.716421 master-0 kubenswrapper[26474]: I0223 13:29:14.716287 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zf2f\" (UniqueName: \"kubernetes.io/projected/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-kube-api-access-4zf2f\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.716421 master-0 kubenswrapper[26474]: I0223 13:29:14.716429 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-config-data\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.717557 master-0 kubenswrapper[26474]: I0223 13:29:14.716530 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-combined-ca-bundle\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.717659 master-0 kubenswrapper[26474]: I0223 13:29:14.717629 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-etc-machine-id\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.717718 master-0 kubenswrapper[26474]: I0223 13:29:14.717659 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-scripts\") pod \"cinder-083a9-scheduler-0\" (UID: 
\"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.722080 master-0 kubenswrapper[26474]: I0223 13:29:14.719643 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-etc-machine-id\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.723043 master-0 kubenswrapper[26474]: I0223 13:29:14.721781 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-combined-ca-bundle\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.723043 master-0 kubenswrapper[26474]: I0223 13:29:14.722126 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-config-data-custom\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.723043 master-0 kubenswrapper[26474]: I0223 13:29:14.722647 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-scripts\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.723689 master-0 kubenswrapper[26474]: I0223 13:29:14.723615 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-config-data\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " 
pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.735030 master-0 kubenswrapper[26474]: I0223 13:29:14.734953 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zf2f\" (UniqueName: \"kubernetes.io/projected/f75e3be8-7018-4e2a-a798-f6e7d9a972ef-kube-api-access-4zf2f\") pod \"cinder-083a9-scheduler-0\" (UID: \"f75e3be8-7018-4e2a-a798-f6e7d9a972ef\") " pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.802645 master-0 kubenswrapper[26474]: I0223 13:29:14.801928 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:14.886021 master-0 kubenswrapper[26474]: I0223 13:29:14.879016 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:15.049111 master-0 kubenswrapper[26474]: I0223 13:29:15.048868 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-combined-ca-bundle\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049111 master-0 kubenswrapper[26474]: I0223 13:29:15.048976 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-run\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049332 master-0 kubenswrapper[26474]: I0223 13:29:15.049123 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-lib-cinder\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049332 master-0 kubenswrapper[26474]: I0223 13:29:15.049141 
26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-sys\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049332 master-0 kubenswrapper[26474]: I0223 13:29:15.049180 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049332 master-0 kubenswrapper[26474]: I0223 13:29:15.049198 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data-custom\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049332 master-0 kubenswrapper[26474]: I0223 13:29:15.049259 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-dev\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049332 master-0 kubenswrapper[26474]: I0223 13:29:15.049307 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-iscsi\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049332 master-0 kubenswrapper[26474]: I0223 13:29:15.049329 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-lib-modules\") pod 
\"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049556 master-0 kubenswrapper[26474]: I0223 13:29:15.049418 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-nvme\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049556 master-0 kubenswrapper[26474]: I0223 13:29:15.049462 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-scripts\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049556 master-0 kubenswrapper[26474]: I0223 13:29:15.049505 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-machine-id\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049556 master-0 kubenswrapper[26474]: I0223 13:29:15.049524 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-cinder\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049556 master-0 kubenswrapper[26474]: I0223 13:29:15.049545 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbtdh\" (UniqueName: \"kubernetes.io/projected/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-kube-api-access-pbtdh\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") " Feb 23 13:29:15.049700 master-0 kubenswrapper[26474]: I0223 13:29:15.049597 
26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-brick\") pod \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\" (UID: \"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8\") "
Feb 23 13:29:15.050335 master-0 kubenswrapper[26474]: I0223 13:29:15.050092 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.050430 master-0 kubenswrapper[26474]: I0223 13:29:15.050211 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.050466 master-0 kubenswrapper[26474]: I0223 13:29:15.050438 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-run" (OuterVolumeSpecName: "run") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.050501 master-0 kubenswrapper[26474]: I0223 13:29:15.050482 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.050536 master-0 kubenswrapper[26474]: I0223 13:29:15.050520 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-sys" (OuterVolumeSpecName: "sys") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.057513 master-0 kubenswrapper[26474]: I0223 13:29:15.052286 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-dev" (OuterVolumeSpecName: "dev") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.057513 master-0 kubenswrapper[26474]: I0223 13:29:15.052375 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.057513 master-0 kubenswrapper[26474]: I0223 13:29:15.052400 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.057513 master-0 kubenswrapper[26474]: I0223 13:29:15.053176 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.057513 master-0 kubenswrapper[26474]: I0223 13:29:15.053899 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:29:15.066591 master-0 kubenswrapper[26474]: I0223 13:29:15.062750 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:15.066591 master-0 kubenswrapper[26474]: I0223 13:29:15.063890 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-scripts" (OuterVolumeSpecName: "scripts") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:15.067126 master-0 kubenswrapper[26474]: I0223 13:29:15.067049 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-kube-api-access-pbtdh" (OuterVolumeSpecName: "kube-api-access-pbtdh") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "kube-api-access-pbtdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:15.119859 master-0 kubenswrapper[26474]: I0223 13:29:15.119773 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155311 26474 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155384 26474 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-dev\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155394 26474 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155405 26474 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-lib-modules\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155414 26474 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-nvme\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155423 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155431 26474 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155439 26474 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155449 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbtdh\" (UniqueName: \"kubernetes.io/projected/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-kube-api-access-pbtdh\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155459 26474 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155470 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155478 26474 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-run\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155515 26474 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.167662 master-0 kubenswrapper[26474]: I0223 13:29:15.155525 26474 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-sys\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.220669 master-0 kubenswrapper[26474]: I0223 13:29:15.220587 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data" (OuterVolumeSpecName: "config-data") pod "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" (UID: "aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:15.260465 master-0 kubenswrapper[26474]: I0223 13:29:15.259670 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:15.339711 master-0 kubenswrapper[26474]: I0223 13:29:15.338080 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-scheduler-0"]
Feb 23 13:29:15.374563 master-0 kubenswrapper[26474]: I0223 13:29:15.373800 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" event={"ID":"206b569e-fd7d-4b95-9655-aeffa25d2dda","Type":"ContainerStarted","Data":"696ad8ef2e3e8d8a6982b10327761073d5f2d8caaf97ab0940aed2a523ef130c"}
Feb 23 13:29:15.374563 master-0 kubenswrapper[26474]: I0223 13:29:15.373857 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" event={"ID":"206b569e-fd7d-4b95-9655-aeffa25d2dda","Type":"ContainerStarted","Data":"ff0d2c79b1670fe6f9766365c8120c3600ae902baacb803434ac6a58286b0661"}
Feb 23 13:29:15.374563 master-0 kubenswrapper[26474]: I0223 13:29:15.373869 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" event={"ID":"206b569e-fd7d-4b95-9655-aeffa25d2dda","Type":"ContainerStarted","Data":"f3c88be9d27c9f0bc91a3e50cc760461271da47658545db6ec4c307675d68f12"}
Feb 23 13:29:15.381525 master-0 kubenswrapper[26474]: I0223 13:29:15.381307 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8","Type":"ContainerDied","Data":"3b005c902abaee5787256e92ce8307a501f67f75f3fb827acdf6634a7fb4acf2"}
Feb 23 13:29:15.381525 master-0 kubenswrapper[26474]: I0223 13:29:15.381392 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.381525 master-0 kubenswrapper[26474]: I0223 13:29:15.381506 26474 scope.go:117] "RemoveContainer" containerID="e32bda4630a090753107d70fbf4eef489e4d85c2a23a7d4934cf0b47e4f60ee8"
Feb 23 13:29:15.415527 master-0 kubenswrapper[26474]: I0223 13:29:15.411303 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" podStartSLOduration=2.411247983 podStartE2EDuration="2.411247983s" podCreationTimestamp="2026-02-23 13:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:15.402540301 +0000 UTC m=+877.249047998" watchObservedRunningTime="2026-02-23 13:29:15.411247983 +0000 UTC m=+877.257755660"
Feb 23 13:29:15.441577 master-0 kubenswrapper[26474]: I0223 13:29:15.439304 26474 scope.go:117] "RemoveContainer" containerID="7d164d3b19e62e442178275016798866803f4043cd6a260fa05f4cfd3c67da52"
Feb 23 13:29:15.483238 master-0 kubenswrapper[26474]: I0223 13:29:15.483126 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-083a9-backup-0"]
Feb 23 13:29:15.511234 master-0 kubenswrapper[26474]: I0223 13:29:15.511097 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-083a9-backup-0"]
Feb 23 13:29:15.527464 master-0 kubenswrapper[26474]: I0223 13:29:15.527323 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-083a9-backup-0"]
Feb 23 13:29:15.531726 master-0 kubenswrapper[26474]: E0223 13:29:15.531683 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerName="cinder-backup"
Feb 23 13:29:15.531818 master-0 kubenswrapper[26474]: I0223 13:29:15.531720 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerName="cinder-backup"
Feb 23 13:29:15.531818 master-0 kubenswrapper[26474]: E0223 13:29:15.531777 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerName="probe"
Feb 23 13:29:15.531818 master-0 kubenswrapper[26474]: I0223 13:29:15.531789 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerName="probe"
Feb 23 13:29:15.532480 master-0 kubenswrapper[26474]: I0223 13:29:15.532409 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerName="probe"
Feb 23 13:29:15.532620 master-0 kubenswrapper[26474]: I0223 13:29:15.532567 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" containerName="cinder-backup"
Feb 23 13:29:15.534569 master-0 kubenswrapper[26474]: I0223 13:29:15.534533 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.544021 master-0 kubenswrapper[26474]: I0223 13:29:15.543953 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-backup-0"]
Feb 23 13:29:15.547612 master-0 kubenswrapper[26474]: I0223 13:29:15.547577 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-083a9-backup-config-data"
Feb 23 13:29:15.669253 master-0 kubenswrapper[26474]: I0223 13:29:15.669101 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-scripts\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669253 master-0 kubenswrapper[26474]: I0223 13:29:15.669169 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz69h\" (UniqueName: \"kubernetes.io/projected/63ab4585-edb2-4419-a1ea-d84c96c68709-kube-api-access-jz69h\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669253 master-0 kubenswrapper[26474]: I0223 13:29:15.669256 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-machine-id\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669579 master-0 kubenswrapper[26474]: I0223 13:29:15.669292 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-sys\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669579 master-0 kubenswrapper[26474]: I0223 13:29:15.669315 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-combined-ca-bundle\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669579 master-0 kubenswrapper[26474]: I0223 13:29:15.669401 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-config-data-custom\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669579 master-0 kubenswrapper[26474]: I0223 13:29:15.669456 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-iscsi\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669579 master-0 kubenswrapper[26474]: I0223 13:29:15.669474 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-dev\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669579 master-0 kubenswrapper[26474]: I0223 13:29:15.669508 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-run\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669579 master-0 kubenswrapper[26474]: I0223 13:29:15.669528 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-locks-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669897 master-0 kubenswrapper[26474]: I0223 13:29:15.669611 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-nvme\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669897 master-0 kubenswrapper[26474]: I0223 13:29:15.669684 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-config-data\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.669897 master-0 kubenswrapper[26474]: I0223 13:29:15.669800 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-lib-modules\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.670017 master-0 kubenswrapper[26474]: I0223 13:29:15.669959 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-locks-brick\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.670150 master-0 kubenswrapper[26474]: I0223 13:29:15.670086 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-lib-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772605 master-0 kubenswrapper[26474]: I0223 13:29:15.772505 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-iscsi\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772605 master-0 kubenswrapper[26474]: I0223 13:29:15.772584 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-dev\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772942 master-0 kubenswrapper[26474]: I0223 13:29:15.772638 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-run\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772942 master-0 kubenswrapper[26474]: I0223 13:29:15.772687 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-locks-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772942 master-0 kubenswrapper[26474]: I0223 13:29:15.772715 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-nvme\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772942 master-0 kubenswrapper[26474]: I0223 13:29:15.772781 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-config-data\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772942 master-0 kubenswrapper[26474]: I0223 13:29:15.772827 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-lib-modules\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772942 master-0 kubenswrapper[26474]: I0223 13:29:15.772883 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-locks-brick\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.772942 master-0 kubenswrapper[26474]: I0223 13:29:15.772933 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-lib-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773249 master-0 kubenswrapper[26474]: I0223 13:29:15.772967 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-scripts\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773249 master-0 kubenswrapper[26474]: I0223 13:29:15.772993 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz69h\" (UniqueName: \"kubernetes.io/projected/63ab4585-edb2-4419-a1ea-d84c96c68709-kube-api-access-jz69h\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773249 master-0 kubenswrapper[26474]: I0223 13:29:15.773046 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-machine-id\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773249 master-0 kubenswrapper[26474]: I0223 13:29:15.773085 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-sys\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773249 master-0 kubenswrapper[26474]: I0223 13:29:15.773132 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-combined-ca-bundle\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773249 master-0 kubenswrapper[26474]: I0223 13:29:15.773212 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-config-data-custom\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773622 master-0 kubenswrapper[26474]: I0223 13:29:15.773560 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-lib-modules\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773754 master-0 kubenswrapper[26474]: I0223 13:29:15.773710 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-locks-brick\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773804 master-0 kubenswrapper[26474]: I0223 13:29:15.773717 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-machine-id\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.773804 master-0 kubenswrapper[26474]: I0223 13:29:15.773778 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-lib-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.774083 master-0 kubenswrapper[26474]: I0223 13:29:15.774052 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-sys\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.774175 master-0 kubenswrapper[26474]: I0223 13:29:15.774147 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-nvme\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.774547 master-0 kubenswrapper[26474]: I0223 13:29:15.774450 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-var-locks-cinder\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.774547 master-0 kubenswrapper[26474]: I0223 13:29:15.774504 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-etc-iscsi\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.774547 master-0 kubenswrapper[26474]: I0223 13:29:15.774527 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-dev\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.774705 master-0 kubenswrapper[26474]: I0223 13:29:15.774563 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63ab4585-edb2-4419-a1ea-d84c96c68709-run\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.784597 master-0 kubenswrapper[26474]: I0223 13:29:15.779219 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-config-data\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.784597 master-0 kubenswrapper[26474]: I0223 13:29:15.779811 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-scripts\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.784597 master-0 kubenswrapper[26474]: I0223 13:29:15.781543 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-combined-ca-bundle\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.784597 master-0 kubenswrapper[26474]: I0223 13:29:15.784032 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ab4585-edb2-4419-a1ea-d84c96c68709-config-data-custom\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.791535 master-0 kubenswrapper[26474]: I0223 13:29:15.791176 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz69h\" (UniqueName: \"kubernetes.io/projected/63ab4585-edb2-4419-a1ea-d84c96c68709-kube-api-access-jz69h\") pod \"cinder-083a9-backup-0\" (UID: \"63ab4585-edb2-4419-a1ea-d84c96c68709\") " pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:15.865716 master-0 kubenswrapper[26474]: I0223 13:29:15.865646 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:16.419612 master-0 kubenswrapper[26474]: I0223 13:29:16.419549 26474 generic.go:334] "Generic (PLEG): container finished" podID="53c0fb4f-cbcb-4439-97c6-0b529f807785" containerID="a78b3cb622ee3b208271d62b3ce2d26ba8cfa1321e5a7dda16561dac5987a158" exitCode=0
Feb 23 13:29:16.441120 master-0 kubenswrapper[26474]: I0223 13:29:16.440242 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cafd4cd-0870-40fb-97a2-30de667cd263" path="/var/lib/kubelet/pods/7cafd4cd-0870-40fb-97a2-30de667cd263/volumes"
Feb 23 13:29:16.441120 master-0 kubenswrapper[26474]: I0223 13:29:16.440933 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8" path="/var/lib/kubelet/pods/aedc71b5-c5f8-4c3f-a9d5-3a939c84a3a8/volumes"
Feb 23 13:29:16.441669 master-0 kubenswrapper[26474]: I0223 13:29:16.441636 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-gzjvk" event={"ID":"53c0fb4f-cbcb-4439-97c6-0b529f807785","Type":"ContainerDied","Data":"a78b3cb622ee3b208271d62b3ce2d26ba8cfa1321e5a7dda16561dac5987a158"}
Feb 23 13:29:16.444030 master-0 kubenswrapper[26474]: I0223 13:29:16.443963 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"f75e3be8-7018-4e2a-a798-f6e7d9a972ef","Type":"ContainerStarted","Data":"6bb57137531017b59c1d18c824cfb80a9d4e4ca3532fa7226834948d95d138cf"}
Feb 23 13:29:16.444030 master-0 kubenswrapper[26474]: I0223 13:29:16.444016 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"f75e3be8-7018-4e2a-a798-f6e7d9a972ef","Type":"ContainerStarted","Data":"7652e1331d8dcd91a24ecaa00ad1faf90a06ad71eb720159a178e401b5ce7b43"}
Feb 23 13:29:16.513775 master-0 kubenswrapper[26474]: I0223 13:29:16.513612 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-083a9-backup-0"]
Feb 23 13:29:17.464374 master-0 kubenswrapper[26474]: I0223 13:29:17.463910 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"63ab4585-edb2-4419-a1ea-d84c96c68709","Type":"ContainerStarted","Data":"a1cb2c42e166229db73801a6b6e0035e8a4302591a92a3f609e9bbb1ad0c39d3"}
Feb 23 13:29:17.464374 master-0 kubenswrapper[26474]: I0223 13:29:17.464001 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"63ab4585-edb2-4419-a1ea-d84c96c68709","Type":"ContainerStarted","Data":"19271a746bb7e1228217215cd47cb825cba7ae432335f2afb0f6d45be85df92f"}
Feb 23 13:29:17.464374 master-0 kubenswrapper[26474]: I0223 13:29:17.464017 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-backup-0" event={"ID":"63ab4585-edb2-4419-a1ea-d84c96c68709","Type":"ContainerStarted","Data":"38fc6875aa43e7480f87abfee7ef75b3cd71f123e7ccb5f2495b43f94f1d475a"}
Feb 23 13:29:17.476363 master-0 kubenswrapper[26474]: I0223 13:29:17.472611 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-083a9-scheduler-0" event={"ID":"f75e3be8-7018-4e2a-a798-f6e7d9a972ef","Type":"ContainerStarted","Data":"b2c0e7e620999571bcf9639adafb74f489e7228b9c9e4f900d31b3b7eb3f327a"}
Feb 23 13:29:17.500366 master-0 kubenswrapper[26474]: I0223 13:29:17.500105 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-backup-0" podStartSLOduration=2.500086941 podStartE2EDuration="2.500086941s" podCreationTimestamp="2026-02-23 13:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:17.49059238 +0000 UTC m=+879.337100057" watchObservedRunningTime="2026-02-23 13:29:17.500086941 +0000 UTC m=+879.346594618"
Feb 23 13:29:17.525367 master-0 kubenswrapper[26474]: I0223 13:29:17.525175 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-083a9-scheduler-0" podStartSLOduration=3.525157492 podStartE2EDuration="3.525157492s" podCreationTimestamp="2026-02-23 13:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:17.517857824 +0000 UTC m=+879.364365521" watchObservedRunningTime="2026-02-23 13:29:17.525157492 +0000 UTC m=+879.371665169"
Feb 23 13:29:18.003403 master-0 kubenswrapper[26474]: I0223 13:29:18.003356 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-gzjvk"
Feb 23 13:29:18.061833 master-0 kubenswrapper[26474]: I0223 13:29:18.061772 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data\") pod \"53c0fb4f-cbcb-4439-97c6-0b529f807785\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") "
Feb 23 13:29:18.061833 master-0 kubenswrapper[26474]: I0223 13:29:18.061842 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/53c0fb4f-cbcb-4439-97c6-0b529f807785-etc-podinfo\") pod \"53c0fb4f-cbcb-4439-97c6-0b529f807785\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") "
Feb 23 13:29:18.062047 master-0 kubenswrapper[26474]: I0223 13:29:18.061960 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xvd4\" (UniqueName: \"kubernetes.io/projected/53c0fb4f-cbcb-4439-97c6-0b529f807785-kube-api-access-8xvd4\") pod \"53c0fb4f-cbcb-4439-97c6-0b529f807785\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") "
Feb 23 13:29:18.062047 master-0 kubenswrapper[26474]: I0223 13:29:18.061991 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-scripts\") pod \"53c0fb4f-cbcb-4439-97c6-0b529f807785\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") "
Feb 23 13:29:18.062134 master-0 kubenswrapper[26474]: I0223 13:29:18.062048 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-combined-ca-bundle\") pod \"53c0fb4f-cbcb-4439-97c6-0b529f807785\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") "
Feb 23 13:29:18.062134 master-0 kubenswrapper[26474]: I0223 13:29:18.062108 26474 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data-merged\") pod \"53c0fb4f-cbcb-4439-97c6-0b529f807785\" (UID: \"53c0fb4f-cbcb-4439-97c6-0b529f807785\") " Feb 23 13:29:18.063994 master-0 kubenswrapper[26474]: I0223 13:29:18.063957 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "53c0fb4f-cbcb-4439-97c6-0b529f807785" (UID: "53c0fb4f-cbcb-4439-97c6-0b529f807785"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:18.076278 master-0 kubenswrapper[26474]: I0223 13:29:18.073995 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c0fb4f-cbcb-4439-97c6-0b529f807785-kube-api-access-8xvd4" (OuterVolumeSpecName: "kube-api-access-8xvd4") pod "53c0fb4f-cbcb-4439-97c6-0b529f807785" (UID: "53c0fb4f-cbcb-4439-97c6-0b529f807785"). InnerVolumeSpecName "kube-api-access-8xvd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:18.076278 master-0 kubenswrapper[26474]: I0223 13:29:18.074086 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/53c0fb4f-cbcb-4439-97c6-0b529f807785-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "53c0fb4f-cbcb-4439-97c6-0b529f807785" (UID: "53c0fb4f-cbcb-4439-97c6-0b529f807785"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 13:29:18.095250 master-0 kubenswrapper[26474]: I0223 13:29:18.095159 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-scripts" (OuterVolumeSpecName: "scripts") pod "53c0fb4f-cbcb-4439-97c6-0b529f807785" (UID: "53c0fb4f-cbcb-4439-97c6-0b529f807785"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:18.144528 master-0 kubenswrapper[26474]: I0223 13:29:18.144405 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53c0fb4f-cbcb-4439-97c6-0b529f807785" (UID: "53c0fb4f-cbcb-4439-97c6-0b529f807785"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:18.165665 master-0 kubenswrapper[26474]: I0223 13:29:18.165595 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:18.165665 master-0 kubenswrapper[26474]: I0223 13:29:18.165660 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:18.165892 master-0 kubenswrapper[26474]: I0223 13:29:18.165679 26474 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:18.165892 master-0 kubenswrapper[26474]: I0223 13:29:18.165691 26474 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/53c0fb4f-cbcb-4439-97c6-0b529f807785-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:18.165892 master-0 kubenswrapper[26474]: I0223 13:29:18.165703 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xvd4\" (UniqueName: \"kubernetes.io/projected/53c0fb4f-cbcb-4439-97c6-0b529f807785-kube-api-access-8xvd4\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:18.177580 master-0 kubenswrapper[26474]: I0223 13:29:18.177514 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data" (OuterVolumeSpecName: "config-data") pod "53c0fb4f-cbcb-4439-97c6-0b529f807785" (UID: "53c0fb4f-cbcb-4439-97c6-0b529f807785"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:18.267018 master-0 kubenswrapper[26474]: I0223 13:29:18.266952 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c0fb4f-cbcb-4439-97c6-0b529f807785-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:18.535471 master-0 kubenswrapper[26474]: I0223 13:29:18.535402 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-gzjvk" Feb 23 13:29:18.538123 master-0 kubenswrapper[26474]: I0223 13:29:18.538087 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-gzjvk" event={"ID":"53c0fb4f-cbcb-4439-97c6-0b529f807785","Type":"ContainerDied","Data":"e8e1a0e5ec671bdbb760aca97579b107f177c033b1bc20448020ca72b508b8b6"} Feb 23 13:29:18.538284 master-0 kubenswrapper[26474]: I0223 13:29:18.538270 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e1a0e5ec671bdbb760aca97579b107f177c033b1bc20448020ca72b508b8b6" Feb 23 13:29:19.068369 master-0 kubenswrapper[26474]: I0223 13:29:19.065147 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:19.075972 master-0 kubenswrapper[26474]: I0223 13:29:19.075897 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-v4ftr"] Feb 23 13:29:19.076687 master-0 kubenswrapper[26474]: E0223 13:29:19.076472 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c0fb4f-cbcb-4439-97c6-0b529f807785" containerName="init" Feb 23 13:29:19.076687 master-0 kubenswrapper[26474]: I0223 13:29:19.076492 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c0fb4f-cbcb-4439-97c6-0b529f807785" containerName="init" Feb 23 13:29:19.076687 master-0 kubenswrapper[26474]: E0223 13:29:19.076527 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c0fb4f-cbcb-4439-97c6-0b529f807785" containerName="ironic-db-sync" Feb 23 13:29:19.076687 master-0 kubenswrapper[26474]: I0223 13:29:19.076533 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c0fb4f-cbcb-4439-97c6-0b529f807785" containerName="ironic-db-sync" Feb 23 13:29:19.076983 master-0 kubenswrapper[26474]: I0223 13:29:19.076776 26474 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="53c0fb4f-cbcb-4439-97c6-0b529f807785" containerName="ironic-db-sync" Feb 23 13:29:19.084803 master-0 kubenswrapper[26474]: I0223 13:29:19.084110 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-v4ftr" Feb 23 13:29:19.149188 master-0 kubenswrapper[26474]: I0223 13:29:19.133362 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-v4ftr"] Feb 23 13:29:19.250758 master-0 kubenswrapper[26474]: I0223 13:29:19.246992 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2417f017-58fa-40a9-bd0a-ac6557cadd27-operator-scripts\") pod \"ironic-inspector-db-create-v4ftr\" (UID: \"2417f017-58fa-40a9-bd0a-ac6557cadd27\") " pod="openstack/ironic-inspector-db-create-v4ftr" Feb 23 13:29:19.250758 master-0 kubenswrapper[26474]: I0223 13:29:19.247069 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkdw\" (UniqueName: \"kubernetes.io/projected/2417f017-58fa-40a9-bd0a-ac6557cadd27-kube-api-access-2pkdw\") pod \"ironic-inspector-db-create-v4ftr\" (UID: \"2417f017-58fa-40a9-bd0a-ac6557cadd27\") " pod="openstack/ironic-inspector-db-create-v4ftr" Feb 23 13:29:19.303472 master-0 kubenswrapper[26474]: I0223 13:29:19.303415 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-d172-account-create-update-hzwp2"] Feb 23 13:29:19.355186 master-0 kubenswrapper[26474]: I0223 13:29:19.355117 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkdw\" (UniqueName: \"kubernetes.io/projected/2417f017-58fa-40a9-bd0a-ac6557cadd27-kube-api-access-2pkdw\") pod \"ironic-inspector-db-create-v4ftr\" (UID: \"2417f017-58fa-40a9-bd0a-ac6557cadd27\") " pod="openstack/ironic-inspector-db-create-v4ftr" Feb 23 13:29:19.355641 master-0 
kubenswrapper[26474]: I0223 13:29:19.355465 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2417f017-58fa-40a9-bd0a-ac6557cadd27-operator-scripts\") pod \"ironic-inspector-db-create-v4ftr\" (UID: \"2417f017-58fa-40a9-bd0a-ac6557cadd27\") " pod="openstack/ironic-inspector-db-create-v4ftr" Feb 23 13:29:19.356310 master-0 kubenswrapper[26474]: I0223 13:29:19.356277 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2417f017-58fa-40a9-bd0a-ac6557cadd27-operator-scripts\") pod \"ironic-inspector-db-create-v4ftr\" (UID: \"2417f017-58fa-40a9-bd0a-ac6557cadd27\") " pod="openstack/ironic-inspector-db-create-v4ftr" Feb 23 13:29:19.363138 master-0 kubenswrapper[26474]: I0223 13:29:19.363067 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-d172-account-create-update-hzwp2"] Feb 23 13:29:19.364429 master-0 kubenswrapper[26474]: I0223 13:29:19.364398 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" Feb 23 13:29:19.394934 master-0 kubenswrapper[26474]: I0223 13:29:19.387076 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Feb 23 13:29:19.440452 master-0 kubenswrapper[26474]: I0223 13:29:19.432983 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7ff48b5-8ngpr"] Feb 23 13:29:19.440452 master-0 kubenswrapper[26474]: I0223 13:29:19.438547 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.449772 master-0 kubenswrapper[26474]: I0223 13:29:19.446774 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7ff48b5-8ngpr"] Feb 23 13:29:19.450609 master-0 kubenswrapper[26474]: I0223 13:29:19.450555 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkdw\" (UniqueName: \"kubernetes.io/projected/2417f017-58fa-40a9-bd0a-ac6557cadd27-kube-api-access-2pkdw\") pod \"ironic-inspector-db-create-v4ftr\" (UID: \"2417f017-58fa-40a9-bd0a-ac6557cadd27\") " pod="openstack/ironic-inspector-db-create-v4ftr" Feb 23 13:29:19.465364 master-0 kubenswrapper[26474]: I0223 13:29:19.457559 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-7d6b446974-djn5h"] Feb 23 13:29:19.465364 master-0 kubenswrapper[26474]: I0223 13:29:19.460547 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.483430 master-0 kubenswrapper[26474]: I0223 13:29:19.474438 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97872e6-b11d-4f3c-b9b9-65814a655637-operator-scripts\") pod \"ironic-inspector-d172-account-create-update-hzwp2\" (UID: \"d97872e6-b11d-4f3c-b9b9-65814a655637\") " pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" Feb 23 13:29:19.483430 master-0 kubenswrapper[26474]: I0223 13:29:19.474514 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mzm\" (UniqueName: \"kubernetes.io/projected/d97872e6-b11d-4f3c-b9b9-65814a655637-kube-api-access-r4mzm\") pod \"ironic-inspector-d172-account-create-update-hzwp2\" (UID: \"d97872e6-b11d-4f3c-b9b9-65814a655637\") " pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" 
Feb 23 13:29:19.483430 master-0 kubenswrapper[26474]: I0223 13:29:19.482331 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Feb 23 13:29:19.510712 master-0 kubenswrapper[26474]: I0223 13:29:19.510310 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-v4ftr" Feb 23 13:29:19.527898 master-0 kubenswrapper[26474]: I0223 13:29:19.527812 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-083a9-volume-lvm-iscsi-0" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577035 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvczd\" (UniqueName: \"kubernetes.io/projected/8d57ee11-aa24-43e7-a712-03b2b12220d1-kube-api-access-gvczd\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577112 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af5b404-f4af-4e67-b355-916c6240db47-combined-ca-bundle\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577141 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97872e6-b11d-4f3c-b9b9-65814a655637-operator-scripts\") pod \"ironic-inspector-d172-account-create-update-hzwp2\" (UID: \"d97872e6-b11d-4f3c-b9b9-65814a655637\") " pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 
13:29:19.577165 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577183 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mzm\" (UniqueName: \"kubernetes.io/projected/d97872e6-b11d-4f3c-b9b9-65814a655637-kube-api-access-r4mzm\") pod \"ironic-inspector-d172-account-create-update-hzwp2\" (UID: \"d97872e6-b11d-4f3c-b9b9-65814a655637\") " pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577258 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577276 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577312 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-config\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: 
\"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577329 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7af5b404-f4af-4e67-b355-916c6240db47-config\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577459 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-svc\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.577496 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nthw\" (UniqueName: \"kubernetes.io/projected/7af5b404-f4af-4e67-b355-916c6240db47-kube-api-access-4nthw\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.583486 master-0 kubenswrapper[26474]: I0223 13:29:19.579125 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97872e6-b11d-4f3c-b9b9-65814a655637-operator-scripts\") pod \"ironic-inspector-d172-account-create-update-hzwp2\" (UID: \"d97872e6-b11d-4f3c-b9b9-65814a655637\") " pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" Feb 23 13:29:19.651549 master-0 kubenswrapper[26474]: I0223 13:29:19.649981 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7d6b446974-djn5h"] 
Feb 23 13:29:19.671396 master-0 kubenswrapper[26474]: I0223 13:29:19.668948 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mzm\" (UniqueName: \"kubernetes.io/projected/d97872e6-b11d-4f3c-b9b9-65814a655637-kube-api-access-r4mzm\") pod \"ironic-inspector-d172-account-create-update-hzwp2\" (UID: \"d97872e6-b11d-4f3c-b9b9-65814a655637\") " pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" Feb 23 13:29:19.704441 master-0 kubenswrapper[26474]: I0223 13:29:19.704271 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af5b404-f4af-4e67-b355-916c6240db47-combined-ca-bundle\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.704441 master-0 kubenswrapper[26474]: I0223 13:29:19.704391 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.704661 master-0 kubenswrapper[26474]: I0223 13:29:19.704596 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.704661 master-0 kubenswrapper[26474]: I0223 13:29:19.704620 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" 
(UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.704730 master-0 kubenswrapper[26474]: I0223 13:29:19.704666 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-config\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.704730 master-0 kubenswrapper[26474]: I0223 13:29:19.704688 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7af5b404-f4af-4e67-b355-916c6240db47-config\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.704816 master-0 kubenswrapper[26474]: I0223 13:29:19.704793 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-svc\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.704884 master-0 kubenswrapper[26474]: I0223 13:29:19.704871 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nthw\" (UniqueName: \"kubernetes.io/projected/7af5b404-f4af-4e67-b355-916c6240db47-kube-api-access-4nthw\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.705175 master-0 kubenswrapper[26474]: I0223 13:29:19.705108 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvczd\" (UniqueName: \"kubernetes.io/projected/8d57ee11-aa24-43e7-a712-03b2b12220d1-kube-api-access-gvczd\") pod 
\"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.713121 master-0 kubenswrapper[26474]: I0223 13:29:19.713047 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7af5b404-f4af-4e67-b355-916c6240db47-config\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.714543 master-0 kubenswrapper[26474]: I0223 13:29:19.713978 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af5b404-f4af-4e67-b355-916c6240db47-combined-ca-bundle\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.714810 master-0 kubenswrapper[26474]: I0223 13:29:19.714761 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-svc\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.714973 master-0 kubenswrapper[26474]: I0223 13:29:19.714936 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.715678 master-0 kubenswrapper[26474]: I0223 13:29:19.715639 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.725450 master-0 kubenswrapper[26474]: I0223 13:29:19.725382 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.726031 master-0 kubenswrapper[26474]: I0223 13:29:19.725983 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-config\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.745828 master-0 kubenswrapper[26474]: I0223 13:29:19.745762 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvczd\" (UniqueName: \"kubernetes.io/projected/8d57ee11-aa24-43e7-a712-03b2b12220d1-kube-api-access-gvczd\") pod \"dnsmasq-dns-5b7ff48b5-8ngpr\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:29:19.761457 master-0 kubenswrapper[26474]: I0223 13:29:19.761389 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nthw\" (UniqueName: \"kubernetes.io/projected/7af5b404-f4af-4e67-b355-916c6240db47-kube-api-access-4nthw\") pod \"ironic-neutron-agent-7d6b446974-djn5h\" (UID: \"7af5b404-f4af-4e67-b355-916c6240db47\") " pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:19.779060 master-0 kubenswrapper[26474]: I0223 13:29:19.778979 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6cf8cbb6b7-ll62g"] Feb 23 13:29:19.782328 master-0 kubenswrapper[26474]: I0223 13:29:19.782278 26474 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.790478 master-0 kubenswrapper[26474]: I0223 13:29:19.790417 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Feb 23 13:29:19.790774 master-0 kubenswrapper[26474]: I0223 13:29:19.790742 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport"
Feb 23 13:29:19.790947 master-0 kubenswrapper[26474]: I0223 13:29:19.790910 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts"
Feb 23 13:29:19.791120 master-0 kubenswrapper[26474]: I0223 13:29:19.791088 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data"
Feb 23 13:29:19.791249 master-0 kubenswrapper[26474]: I0223 13:29:19.791225 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 23 13:29:19.812503 master-0 kubenswrapper[26474]: I0223 13:29:19.811328 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-083a9-scheduler-0"
Feb 23 13:29:19.876266 master-0 kubenswrapper[26474]: I0223 13:29:19.876163 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6cf8cbb6b7-ll62g"]
Feb 23 13:29:19.891964 master-0 kubenswrapper[26474]: I0223 13:29:19.891870 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-083a9-api-0"
Feb 23 13:29:19.913283 master-0 kubenswrapper[26474]: I0223 13:29:19.912582 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2"
Feb 23 13:29:19.915231 master-0 kubenswrapper[26474]: I0223 13:29:19.914895 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-logs\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.915231 master-0 kubenswrapper[26474]: I0223 13:29:19.914944 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d0c6e3b6-c201-44f9-9100-819b15b552f4-etc-podinfo\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.915231 master-0 kubenswrapper[26474]: I0223 13:29:19.914964 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjzl\" (UniqueName: \"kubernetes.io/projected/d0c6e3b6-c201-44f9-9100-819b15b552f4-kube-api-access-wvjzl\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.915231 master-0 kubenswrapper[26474]: I0223 13:29:19.915052 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-combined-ca-bundle\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.915231 master-0 kubenswrapper[26474]: I0223 13:29:19.915095 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-merged\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.915231 master-0 kubenswrapper[26474]: I0223 13:29:19.915139 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-scripts\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.915231 master-0 kubenswrapper[26474]: I0223 13:29:19.915172 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.915231 master-0 kubenswrapper[26474]: I0223 13:29:19.915190 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-custom\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:19.946209 master-0 kubenswrapper[26474]: I0223 13:29:19.946142 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr"
Feb 23 13:29:20.000513 master-0 kubenswrapper[26474]: I0223 13:29:20.000047 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h"
Feb 23 13:29:20.017967 master-0 kubenswrapper[26474]: I0223 13:29:20.017818 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-combined-ca-bundle\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.017967 master-0 kubenswrapper[26474]: I0223 13:29:20.017906 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-merged\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.018078 master-0 kubenswrapper[26474]: I0223 13:29:20.017974 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-scripts\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.018078 master-0 kubenswrapper[26474]: I0223 13:29:20.018012 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.018078 master-0 kubenswrapper[26474]: I0223 13:29:20.018034 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-custom\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.018162 master-0 kubenswrapper[26474]: I0223 13:29:20.018141 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-logs\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.018195 master-0 kubenswrapper[26474]: I0223 13:29:20.018163 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d0c6e3b6-c201-44f9-9100-819b15b552f4-etc-podinfo\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.018195 master-0 kubenswrapper[26474]: I0223 13:29:20.018180 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjzl\" (UniqueName: \"kubernetes.io/projected/d0c6e3b6-c201-44f9-9100-819b15b552f4-kube-api-access-wvjzl\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.020538 master-0 kubenswrapper[26474]: I0223 13:29:20.020210 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-logs\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.021000 master-0 kubenswrapper[26474]: I0223 13:29:20.020909 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-merged\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.034819 master-0 kubenswrapper[26474]: I0223 13:29:20.029586 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.035599 master-0 kubenswrapper[26474]: I0223 13:29:20.035558 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-custom\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.035814 master-0 kubenswrapper[26474]: I0223 13:29:20.035791 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-combined-ca-bundle\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.036294 master-0 kubenswrapper[26474]: I0223 13:29:20.036278 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-scripts\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.075916 master-0 kubenswrapper[26474]: I0223 13:29:20.036825 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d0c6e3b6-c201-44f9-9100-819b15b552f4-etc-podinfo\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.190113 master-0 kubenswrapper[26474]: I0223 13:29:20.190051 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvjzl\" (UniqueName: \"kubernetes.io/projected/d0c6e3b6-c201-44f9-9100-819b15b552f4-kube-api-access-wvjzl\") pod \"ironic-6cf8cbb6b7-ll62g\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") " pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.225966 master-0 kubenswrapper[26474]: I0223 13:29:20.222444 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:29:20.373748 master-0 kubenswrapper[26474]: I0223 13:29:20.371472 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:20.539643 master-0 kubenswrapper[26474]: I0223 13:29:20.539549 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-v4ftr"]
Feb 23 13:29:20.769160 master-0 kubenswrapper[26474]: I0223 13:29:20.768264 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-v4ftr" event={"ID":"2417f017-58fa-40a9-bd0a-ac6557cadd27","Type":"ContainerStarted","Data":"df75897a7f5227800e1e18fa218c163d5f75a49106a5da5434d570b388aa200a"}
Feb 23 13:29:20.799048 master-0 kubenswrapper[26474]: I0223 13:29:20.794672 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:29:20.869084 master-0 kubenswrapper[26474]: I0223 13:29:20.868755 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-083a9-backup-0"
Feb 23 13:29:20.995603 master-0 kubenswrapper[26474]: I0223 13:29:20.995070 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-d172-account-create-update-hzwp2"]
Feb 23 13:29:21.020190 master-0 kubenswrapper[26474]: I0223 13:29:21.020130 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:29:21.156103 master-0 kubenswrapper[26474]: I0223 13:29:21.155647 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7ff48b5-8ngpr"]
Feb 23 13:29:21.231954 master-0 kubenswrapper[26474]: I0223 13:29:21.231896 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"]
Feb 23 13:29:21.260016 master-0 kubenswrapper[26474]: I0223 13:29:21.259953 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Feb 23 13:29:21.260256 master-0 kubenswrapper[26474]: I0223 13:29:21.260034 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c5c8b54d6-wsqw8"
Feb 23 13:29:21.260256 master-0 kubenswrapper[26474]: I0223 13:29:21.260154 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.264654 master-0 kubenswrapper[26474]: I0223 13:29:21.264619 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data"
Feb 23 13:29:21.273274 master-0 kubenswrapper[26474]: I0223 13:29:21.273215 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts"
Feb 23 13:29:21.293890 master-0 kubenswrapper[26474]: I0223 13:29:21.293739 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7d6b446974-djn5h"]
Feb 23 13:29:21.324361 master-0 kubenswrapper[26474]: I0223 13:29:21.323577 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6cf8cbb6b7-ll62g"]
Feb 23 13:29:21.411067 master-0 kubenswrapper[26474]: W0223 13:29:21.410369 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0c6e3b6_c201_44f9_9100_819b15b552f4.slice/crio-c7ecd6fc9cdb0f5c62f1170597ba70d4c7be68eeb98f7776f19c8736c3d8953d WatchSource:0}: Error finding container c7ecd6fc9cdb0f5c62f1170597ba70d4c7be68eeb98f7776f19c8736c3d8953d: Status 404 returned error can't find the container with id c7ecd6fc9cdb0f5c62f1170597ba70d4c7be68eeb98f7776f19c8736c3d8953d
Feb 23 13:29:21.435631 master-0 kubenswrapper[26474]: I0223 13:29:21.435443 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-66dfd5f7c4-jsbfx"]
Feb 23 13:29:21.450128 master-0 kubenswrapper[26474]: I0223 13:29:21.449492 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-f6c5d9dcc-k6qfz"
Feb 23 13:29:21.465791 master-0 kubenswrapper[26474]: I0223 13:29:21.465629 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.465791 master-0 kubenswrapper[26474]: I0223 13:29:21.465775 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-scripts\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.466028 master-0 kubenswrapper[26474]: I0223 13:29:21.465864 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.466028 master-0 kubenswrapper[26474]: I0223 13:29:21.465987 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/808fa98d-dace-4799-9059-a26510355d62-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.466111 master-0 kubenswrapper[26474]: I0223 13:29:21.466054 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn24d\" (UniqueName: \"kubernetes.io/projected/808fa98d-dace-4799-9059-a26510355d62-kube-api-access-gn24d\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.466111 master-0 kubenswrapper[26474]: I0223 13:29:21.466098 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65153d03-ab97-4b98-a1f0-88b70a37b728\" (UniqueName: \"kubernetes.io/csi/topolvm.io^082656f8-c6b6-4a37-b021-3e3046c1277b\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.466176 master-0 kubenswrapper[26474]: I0223 13:29:21.466129 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-config-data\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.466279 master-0 kubenswrapper[26474]: I0223 13:29:21.466261 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/808fa98d-dace-4799-9059-a26510355d62-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.569754 master-0 kubenswrapper[26474]: I0223 13:29:21.569023 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.569754 master-0 kubenswrapper[26474]: I0223 13:29:21.569661 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-scripts\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.569754 master-0 kubenswrapper[26474]: I0223 13:29:21.569707 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.570020 master-0 kubenswrapper[26474]: I0223 13:29:21.569767 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/808fa98d-dace-4799-9059-a26510355d62-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.570020 master-0 kubenswrapper[26474]: I0223 13:29:21.569802 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn24d\" (UniqueName: \"kubernetes.io/projected/808fa98d-dace-4799-9059-a26510355d62-kube-api-access-gn24d\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.570020 master-0 kubenswrapper[26474]: I0223 13:29:21.569829 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65153d03-ab97-4b98-a1f0-88b70a37b728\" (UniqueName: \"kubernetes.io/csi/topolvm.io^082656f8-c6b6-4a37-b021-3e3046c1277b\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.570020 master-0 kubenswrapper[26474]: I0223 13:29:21.569870 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-config-data\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.570020 master-0 kubenswrapper[26474]: I0223 13:29:21.569916 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/808fa98d-dace-4799-9059-a26510355d62-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.571303 master-0 kubenswrapper[26474]: I0223 13:29:21.570419 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/808fa98d-dace-4799-9059-a26510355d62-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.573758 master-0 kubenswrapper[26474]: I0223 13:29:21.573626 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.588542 master-0 kubenswrapper[26474]: I0223 13:29:21.578939 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 13:29:21.588542 master-0 kubenswrapper[26474]: I0223 13:29:21.578998 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65153d03-ab97-4b98-a1f0-88b70a37b728\" (UniqueName: \"kubernetes.io/csi/topolvm.io^082656f8-c6b6-4a37-b021-3e3046c1277b\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/729e637e219344add5430f2798d66c02c21ac982b009bfc350168bf27a9d8663/globalmount\"" pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.588542 master-0 kubenswrapper[26474]: I0223 13:29:21.579429 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/808fa98d-dace-4799-9059-a26510355d62-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.588542 master-0 kubenswrapper[26474]: I0223 13:29:21.580178 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.588542 master-0 kubenswrapper[26474]: I0223 13:29:21.583442 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-scripts\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.588542 master-0 kubenswrapper[26474]: I0223 13:29:21.586279 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808fa98d-dace-4799-9059-a26510355d62-config-data\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.592520 master-0 kubenswrapper[26474]: I0223 13:29:21.592280 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn24d\" (UniqueName: \"kubernetes.io/projected/808fa98d-dace-4799-9059-a26510355d62-kube-api-access-gn24d\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:21.789923 master-0 kubenswrapper[26474]: I0223 13:29:21.789815 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" event={"ID":"d97872e6-b11d-4f3c-b9b9-65814a655637","Type":"ContainerStarted","Data":"d06465e430d091585e71c1896a612e397d2589314c0d52a54baad7de4ab0ff64"}
Feb 23 13:29:21.789923 master-0 kubenswrapper[26474]: I0223 13:29:21.789877 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" event={"ID":"d97872e6-b11d-4f3c-b9b9-65814a655637","Type":"ContainerStarted","Data":"cc71708f5336c39685af199d5a1e5cc0736fbc5c6577c21e1d352e0ca1fab96c"}
Feb 23 13:29:21.797568 master-0 kubenswrapper[26474]: I0223 13:29:21.796671 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" event={"ID":"7af5b404-f4af-4e67-b355-916c6240db47","Type":"ContainerStarted","Data":"dd6be0a408de12957d632601003b5b6c81d82387a588c9be930d1d58bfc561b3"}
Feb 23 13:29:21.800031 master-0 kubenswrapper[26474]: I0223 13:29:21.799943 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" event={"ID":"8d57ee11-aa24-43e7-a712-03b2b12220d1","Type":"ContainerStarted","Data":"03629aef69a27ad3433a9da5cf1beb4a3bd8bf1a60e6239b2ae23390485f9a58"}
Feb 23 13:29:21.801068 master-0 kubenswrapper[26474]: I0223 13:29:21.801011 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf8cbb6b7-ll62g" event={"ID":"d0c6e3b6-c201-44f9-9100-819b15b552f4","Type":"ContainerStarted","Data":"c7ecd6fc9cdb0f5c62f1170597ba70d4c7be68eeb98f7776f19c8736c3d8953d"}
Feb 23 13:29:21.804372 master-0 kubenswrapper[26474]: I0223 13:29:21.804327 26474 generic.go:334] "Generic (PLEG): container finished" podID="2417f017-58fa-40a9-bd0a-ac6557cadd27" containerID="e2f23f9c9106bdebc9ab8834949bf76dc9b921857d21c0d45a170fa3604b0c31" exitCode=0
Feb 23 13:29:21.804666 master-0 kubenswrapper[26474]: I0223 13:29:21.804639 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-66dfd5f7c4-jsbfx" podUID="9725d464-c206-407c-9b4c-983607fe63d1" containerName="placement-log" containerID="cri-o://f6d72a61e386612847d6ee612fc2a5240c7e5b0617d2e237633addef58284877" gracePeriod=30
Feb 23 13:29:21.805011 master-0 kubenswrapper[26474]: I0223 13:29:21.804986 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-v4ftr" event={"ID":"2417f017-58fa-40a9-bd0a-ac6557cadd27","Type":"ContainerDied","Data":"e2f23f9c9106bdebc9ab8834949bf76dc9b921857d21c0d45a170fa3604b0c31"}
Feb 23 13:29:21.805881 master-0 kubenswrapper[26474]: I0223 13:29:21.805848 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-66dfd5f7c4-jsbfx" podUID="9725d464-c206-407c-9b4c-983607fe63d1" containerName="placement-api" containerID="cri-o://1527ec91ea674eff5ff80556700585faa0273171d38d35b5a317e4f94cfb868c" gracePeriod=30
Feb 23 13:29:21.843986 master-0 kubenswrapper[26474]: I0223 13:29:21.843784 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" podStartSLOduration=2.843751664 podStartE2EDuration="2.843751664s" podCreationTimestamp="2026-02-23 13:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:21.809515759 +0000 UTC m=+883.656023456" watchObservedRunningTime="2026-02-23 13:29:21.843751664 +0000 UTC m=+883.690259341"
Feb 23 13:29:22.819480 master-0 kubenswrapper[26474]: I0223 13:29:22.819386 26474 generic.go:334] "Generic (PLEG): container finished" podID="8d57ee11-aa24-43e7-a712-03b2b12220d1" containerID="73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43" exitCode=0
Feb 23 13:29:22.826791 master-0 kubenswrapper[26474]: I0223 13:29:22.819513 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" event={"ID":"8d57ee11-aa24-43e7-a712-03b2b12220d1","Type":"ContainerDied","Data":"73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43"}
Feb 23 13:29:22.826791 master-0 kubenswrapper[26474]: I0223 13:29:22.822530 26474 generic.go:334] "Generic (PLEG): container finished" podID="d97872e6-b11d-4f3c-b9b9-65814a655637" containerID="d06465e430d091585e71c1896a612e397d2589314c0d52a54baad7de4ab0ff64" exitCode=0
Feb 23 13:29:22.826791 master-0 kubenswrapper[26474]: I0223 13:29:22.822637 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" event={"ID":"d97872e6-b11d-4f3c-b9b9-65814a655637","Type":"ContainerDied","Data":"d06465e430d091585e71c1896a612e397d2589314c0d52a54baad7de4ab0ff64"}
Feb 23 13:29:22.827812 master-0 kubenswrapper[26474]: I0223 13:29:22.826786 26474 generic.go:334] "Generic (PLEG): container finished" podID="9725d464-c206-407c-9b4c-983607fe63d1" containerID="f6d72a61e386612847d6ee612fc2a5240c7e5b0617d2e237633addef58284877" exitCode=143
Feb 23 13:29:22.827812 master-0 kubenswrapper[26474]: I0223 13:29:22.827003 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66dfd5f7c4-jsbfx" event={"ID":"9725d464-c206-407c-9b4c-983607fe63d1","Type":"ContainerDied","Data":"f6d72a61e386612847d6ee612fc2a5240c7e5b0617d2e237633addef58284877"}
Feb 23 13:29:23.025031 master-0 kubenswrapper[26474]: I0223 13:29:23.024950 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65153d03-ab97-4b98-a1f0-88b70a37b728\" (UniqueName: \"kubernetes.io/csi/topolvm.io^082656f8-c6b6-4a37-b021-3e3046c1277b\") pod \"ironic-conductor-0\" (UID: \"808fa98d-dace-4799-9059-a26510355d62\") " pod="openstack/ironic-conductor-0"
Feb 23 13:29:23.128317 master-0 kubenswrapper[26474]: I0223 13:29:23.128152 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Feb 23 13:29:23.331303 master-0 kubenswrapper[26474]: I0223 13:29:23.331259 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-v4ftr"
Feb 23 13:29:23.492378 master-0 kubenswrapper[26474]: I0223 13:29:23.490276 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2417f017-58fa-40a9-bd0a-ac6557cadd27-operator-scripts\") pod \"2417f017-58fa-40a9-bd0a-ac6557cadd27\" (UID: \"2417f017-58fa-40a9-bd0a-ac6557cadd27\") "
Feb 23 13:29:23.492378 master-0 kubenswrapper[26474]: I0223 13:29:23.490434 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pkdw\" (UniqueName: \"kubernetes.io/projected/2417f017-58fa-40a9-bd0a-ac6557cadd27-kube-api-access-2pkdw\") pod \"2417f017-58fa-40a9-bd0a-ac6557cadd27\" (UID: \"2417f017-58fa-40a9-bd0a-ac6557cadd27\") "
Feb 23 13:29:23.492378 master-0 kubenswrapper[26474]: I0223 13:29:23.490799 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2417f017-58fa-40a9-bd0a-ac6557cadd27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2417f017-58fa-40a9-bd0a-ac6557cadd27" (UID: "2417f017-58fa-40a9-bd0a-ac6557cadd27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:29:23.492378 master-0 kubenswrapper[26474]: I0223 13:29:23.491821 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2417f017-58fa-40a9-bd0a-ac6557cadd27-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:23.505408 master-0 kubenswrapper[26474]: I0223 13:29:23.505213 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2417f017-58fa-40a9-bd0a-ac6557cadd27-kube-api-access-2pkdw" (OuterVolumeSpecName: "kube-api-access-2pkdw") pod "2417f017-58fa-40a9-bd0a-ac6557cadd27" (UID: "2417f017-58fa-40a9-bd0a-ac6557cadd27"). InnerVolumeSpecName "kube-api-access-2pkdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:23.597242 master-0 kubenswrapper[26474]: I0223 13:29:23.596885 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pkdw\" (UniqueName: \"kubernetes.io/projected/2417f017-58fa-40a9-bd0a-ac6557cadd27-kube-api-access-2pkdw\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:23.927291 master-0 kubenswrapper[26474]: I0223 13:29:23.927207 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-c7db585c-7gj9c"]
Feb 23 13:29:23.928236 master-0 kubenswrapper[26474]: E0223 13:29:23.927866 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2417f017-58fa-40a9-bd0a-ac6557cadd27" containerName="mariadb-database-create"
Feb 23 13:29:23.928236 master-0 kubenswrapper[26474]: I0223 13:29:23.927882 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="2417f017-58fa-40a9-bd0a-ac6557cadd27" containerName="mariadb-database-create"
Feb 23 13:29:23.928236 master-0 kubenswrapper[26474]: I0223 13:29:23.928117 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="2417f017-58fa-40a9-bd0a-ac6557cadd27" containerName="mariadb-database-create"
Feb 23 13:29:23.938469 master-0 kubenswrapper[26474]: I0223 13:29:23.932608 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-v4ftr"
Feb 23 13:29:23.938469 master-0 kubenswrapper[26474]: I0223 13:29:23.932766 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-v4ftr" event={"ID":"2417f017-58fa-40a9-bd0a-ac6557cadd27","Type":"ContainerDied","Data":"df75897a7f5227800e1e18fa218c163d5f75a49106a5da5434d570b388aa200a"}
Feb 23 13:29:23.938469 master-0 kubenswrapper[26474]: I0223 13:29:23.932848 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df75897a7f5227800e1e18fa218c163d5f75a49106a5da5434d570b388aa200a"
Feb 23 13:29:23.938469 master-0 kubenswrapper[26474]: I0223 13:29:23.933053 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c7db585c-7gj9c"
Feb 23 13:29:23.965405 master-0 kubenswrapper[26474]: I0223 13:29:23.941825 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc"
Feb 23 13:29:23.965405 master-0 kubenswrapper[26474]: I0223 13:29:23.954640 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc"
Feb 23 13:29:23.982804 master-0 kubenswrapper[26474]: I0223 13:29:23.971514 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c7db585c-7gj9c"]
Feb 23 13:29:23.982804 master-0 kubenswrapper[26474]: I0223 13:29:23.972836 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" event={"ID":"8d57ee11-aa24-43e7-a712-03b2b12220d1","Type":"ContainerStarted","Data":"589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4"}
Feb 23 13:29:23.982804 master-0 kubenswrapper[26474]: I0223 13:29:23.972977 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr"
Feb 23 13:29:24.017390 master-0 kubenswrapper[26474]: I0223 13:29:24.011426 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Feb 23 13:29:24.031818 master-0 kubenswrapper[26474]: I0223 13:29:24.028508 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-internal-tls-certs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c"
Feb 23 13:29:24.031818 master-0 kubenswrapper[26474]: I0223 13:29:24.029585 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-public-tls-certs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c"
Feb 23 13:29:24.031818 master-0 kubenswrapper[26474]: I0223 13:29:24.029643 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data-custom\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c"
Feb 23 13:29:24.031818 master-0 kubenswrapper[26474]: I0223 13:29:24.031515 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4671c55a-f3de-4d2d-8717-452ee80d2691-etc-podinfo\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c"
Feb 23 13:29:24.031818 master-0 kubenswrapper[26474]: I0223 13:29:24.031583 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/4671c55a-f3de-4d2d-8717-452ee80d2691-logs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.031818 master-0 kubenswrapper[26474]: I0223 13:29:24.031654 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.032875 master-0 kubenswrapper[26474]: I0223 13:29:24.032194 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-scripts\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.032875 master-0 kubenswrapper[26474]: I0223 13:29:24.032251 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data-merged\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.032875 master-0 kubenswrapper[26474]: I0223 13:29:24.032698 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-combined-ca-bundle\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.033028 master-0 kubenswrapper[26474]: I0223 13:29:24.032906 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qt88r\" (UniqueName: \"kubernetes.io/projected/4671c55a-f3de-4d2d-8717-452ee80d2691-kube-api-access-qt88r\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.092132 master-0 kubenswrapper[26474]: I0223 13:29:24.092026 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" podStartSLOduration=5.092000148 podStartE2EDuration="5.092000148s" podCreationTimestamp="2026-02-23 13:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:24.077365961 +0000 UTC m=+885.923873648" watchObservedRunningTime="2026-02-23 13:29:24.092000148 +0000 UTC m=+885.938507825" Feb 23 13:29:24.135953 master-0 kubenswrapper[26474]: I0223 13:29:24.135899 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-internal-tls-certs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.136202 master-0 kubenswrapper[26474]: I0223 13:29:24.135973 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-public-tls-certs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.136202 master-0 kubenswrapper[26474]: I0223 13:29:24.136007 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data-custom\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " 
pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.136815 master-0 kubenswrapper[26474]: I0223 13:29:24.136777 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4671c55a-f3de-4d2d-8717-452ee80d2691-etc-podinfo\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.136815 master-0 kubenswrapper[26474]: I0223 13:29:24.136812 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4671c55a-f3de-4d2d-8717-452ee80d2691-logs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.136969 master-0 kubenswrapper[26474]: I0223 13:29:24.136886 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.137652 master-0 kubenswrapper[26474]: I0223 13:29:24.137047 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-scripts\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.137652 master-0 kubenswrapper[26474]: I0223 13:29:24.137271 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data-merged\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.137652 master-0 kubenswrapper[26474]: 
I0223 13:29:24.137492 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-combined-ca-bundle\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.137652 master-0 kubenswrapper[26474]: I0223 13:29:24.137613 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt88r\" (UniqueName: \"kubernetes.io/projected/4671c55a-f3de-4d2d-8717-452ee80d2691-kube-api-access-qt88r\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.138120 master-0 kubenswrapper[26474]: I0223 13:29:24.138054 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data-merged\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.138424 master-0 kubenswrapper[26474]: I0223 13:29:24.138401 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4671c55a-f3de-4d2d-8717-452ee80d2691-logs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.143071 master-0 kubenswrapper[26474]: I0223 13:29:24.143035 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-public-tls-certs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.143530 master-0 kubenswrapper[26474]: I0223 13:29:24.143485 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-internal-tls-certs\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.143651 master-0 kubenswrapper[26474]: I0223 13:29:24.143539 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-scripts\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.145677 master-0 kubenswrapper[26474]: I0223 13:29:24.145654 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.145810 master-0 kubenswrapper[26474]: I0223 13:29:24.145786 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-config-data-custom\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.146996 master-0 kubenswrapper[26474]: I0223 13:29:24.146945 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4671c55a-f3de-4d2d-8717-452ee80d2691-combined-ca-bundle\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.156591 master-0 kubenswrapper[26474]: I0223 13:29:24.156246 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/4671c55a-f3de-4d2d-8717-452ee80d2691-etc-podinfo\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.159599 master-0 kubenswrapper[26474]: I0223 13:29:24.157936 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt88r\" (UniqueName: \"kubernetes.io/projected/4671c55a-f3de-4d2d-8717-452ee80d2691-kube-api-access-qt88r\") pod \"ironic-c7db585c-7gj9c\" (UID: \"4671c55a-f3de-4d2d-8717-452ee80d2691\") " pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.302695 master-0 kubenswrapper[26474]: I0223 13:29:24.302444 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:24.527937 master-0 kubenswrapper[26474]: W0223 13:29:24.527856 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod808fa98d_dace_4799_9059_a26510355d62.slice/crio-95310ba005eb7849433b809642cb9150ed130569af6be4991cf373a1e8887ac5 WatchSource:0}: Error finding container 95310ba005eb7849433b809642cb9150ed130569af6be4991cf373a1e8887ac5: Status 404 returned error can't find the container with id 95310ba005eb7849433b809642cb9150ed130569af6be4991cf373a1e8887ac5 Feb 23 13:29:24.640750 master-0 kubenswrapper[26474]: I0223 13:29:24.639887 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" Feb 23 13:29:24.804965 master-0 kubenswrapper[26474]: I0223 13:29:24.798440 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97872e6-b11d-4f3c-b9b9-65814a655637-operator-scripts\") pod \"d97872e6-b11d-4f3c-b9b9-65814a655637\" (UID: \"d97872e6-b11d-4f3c-b9b9-65814a655637\") " Feb 23 13:29:24.804965 master-0 kubenswrapper[26474]: I0223 13:29:24.798724 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4mzm\" (UniqueName: \"kubernetes.io/projected/d97872e6-b11d-4f3c-b9b9-65814a655637-kube-api-access-r4mzm\") pod \"d97872e6-b11d-4f3c-b9b9-65814a655637\" (UID: \"d97872e6-b11d-4f3c-b9b9-65814a655637\") " Feb 23 13:29:24.804965 master-0 kubenswrapper[26474]: I0223 13:29:24.802930 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97872e6-b11d-4f3c-b9b9-65814a655637-kube-api-access-r4mzm" (OuterVolumeSpecName: "kube-api-access-r4mzm") pod "d97872e6-b11d-4f3c-b9b9-65814a655637" (UID: "d97872e6-b11d-4f3c-b9b9-65814a655637"). InnerVolumeSpecName "kube-api-access-r4mzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:24.804965 master-0 kubenswrapper[26474]: I0223 13:29:24.803311 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97872e6-b11d-4f3c-b9b9-65814a655637-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d97872e6-b11d-4f3c-b9b9-65814a655637" (UID: "d97872e6-b11d-4f3c-b9b9-65814a655637"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:24.902180 master-0 kubenswrapper[26474]: I0223 13:29:24.902098 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4mzm\" (UniqueName: \"kubernetes.io/projected/d97872e6-b11d-4f3c-b9b9-65814a655637-kube-api-access-r4mzm\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:24.902180 master-0 kubenswrapper[26474]: I0223 13:29:24.902154 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97872e6-b11d-4f3c-b9b9-65814a655637-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:24.996009 master-0 kubenswrapper[26474]: I0223 13:29:24.994537 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" event={"ID":"d97872e6-b11d-4f3c-b9b9-65814a655637","Type":"ContainerDied","Data":"cc71708f5336c39685af199d5a1e5cc0736fbc5c6577c21e1d352e0ca1fab96c"} Feb 23 13:29:24.996009 master-0 kubenswrapper[26474]: I0223 13:29:24.994590 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc71708f5336c39685af199d5a1e5cc0736fbc5c6577c21e1d352e0ca1fab96c" Feb 23 13:29:24.996009 master-0 kubenswrapper[26474]: I0223 13:29:24.994647 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-d172-account-create-update-hzwp2" Feb 23 13:29:25.006838 master-0 kubenswrapper[26474]: I0223 13:29:25.005210 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerStarted","Data":"95310ba005eb7849433b809642cb9150ed130569af6be4991cf373a1e8887ac5"} Feb 23 13:29:25.058416 master-0 kubenswrapper[26474]: I0223 13:29:25.056594 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-083a9-scheduler-0" Feb 23 13:29:25.161481 master-0 kubenswrapper[26474]: I0223 13:29:25.161407 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c7db585c-7gj9c"] Feb 23 13:29:25.568163 master-0 kubenswrapper[26474]: W0223 13:29:25.568051 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4671c55a_f3de_4d2d_8717_452ee80d2691.slice/crio-1df36a3f6b3224ab80eafe7cbfd8c51b9056467e510c83fe19a89e7cfe9cd86d WatchSource:0}: Error finding container 1df36a3f6b3224ab80eafe7cbfd8c51b9056467e510c83fe19a89e7cfe9cd86d: Status 404 returned error can't find the container with id 1df36a3f6b3224ab80eafe7cbfd8c51b9056467e510c83fe19a89e7cfe9cd86d Feb 23 13:29:26.016438 master-0 kubenswrapper[26474]: I0223 13:29:26.016315 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c7db585c-7gj9c" event={"ID":"4671c55a-f3de-4d2d-8717-452ee80d2691","Type":"ContainerStarted","Data":"1df36a3f6b3224ab80eafe7cbfd8c51b9056467e510c83fe19a89e7cfe9cd86d"} Feb 23 13:29:26.019579 master-0 kubenswrapper[26474]: I0223 13:29:26.018145 26474 generic.go:334] "Generic (PLEG): container finished" podID="9725d464-c206-407c-9b4c-983607fe63d1" containerID="1527ec91ea674eff5ff80556700585faa0273171d38d35b5a317e4f94cfb868c" exitCode=0 Feb 23 13:29:26.019579 master-0 kubenswrapper[26474]: I0223 13:29:26.018199 
26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66dfd5f7c4-jsbfx" event={"ID":"9725d464-c206-407c-9b4c-983607fe63d1","Type":"ContainerDied","Data":"1527ec91ea674eff5ff80556700585faa0273171d38d35b5a317e4f94cfb868c"} Feb 23 13:29:26.055226 master-0 kubenswrapper[26474]: I0223 13:29:26.055190 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66dfd5f7c4-jsbfx" Feb 23 13:29:26.134350 master-0 kubenswrapper[26474]: I0223 13:29:26.134271 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqdll\" (UniqueName: \"kubernetes.io/projected/9725d464-c206-407c-9b4c-983607fe63d1-kube-api-access-gqdll\") pod \"9725d464-c206-407c-9b4c-983607fe63d1\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " Feb 23 13:29:26.134566 master-0 kubenswrapper[26474]: I0223 13:29:26.134384 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9725d464-c206-407c-9b4c-983607fe63d1-logs\") pod \"9725d464-c206-407c-9b4c-983607fe63d1\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " Feb 23 13:29:26.134566 master-0 kubenswrapper[26474]: I0223 13:29:26.134531 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-config-data\") pod \"9725d464-c206-407c-9b4c-983607fe63d1\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " Feb 23 13:29:26.134652 master-0 kubenswrapper[26474]: I0223 13:29:26.134609 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-internal-tls-certs\") pod \"9725d464-c206-407c-9b4c-983607fe63d1\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " Feb 23 13:29:26.134684 master-0 kubenswrapper[26474]: I0223 13:29:26.134653 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-scripts\") pod \"9725d464-c206-407c-9b4c-983607fe63d1\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " Feb 23 13:29:26.134798 master-0 kubenswrapper[26474]: I0223 13:29:26.134784 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-combined-ca-bundle\") pod \"9725d464-c206-407c-9b4c-983607fe63d1\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " Feb 23 13:29:26.134845 master-0 kubenswrapper[26474]: I0223 13:29:26.134811 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-public-tls-certs\") pod \"9725d464-c206-407c-9b4c-983607fe63d1\" (UID: \"9725d464-c206-407c-9b4c-983607fe63d1\") " Feb 23 13:29:26.142109 master-0 kubenswrapper[26474]: I0223 13:29:26.137263 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9725d464-c206-407c-9b4c-983607fe63d1-logs" (OuterVolumeSpecName: "logs") pod "9725d464-c206-407c-9b4c-983607fe63d1" (UID: "9725d464-c206-407c-9b4c-983607fe63d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:26.149944 master-0 kubenswrapper[26474]: I0223 13:29:26.149865 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-scripts" (OuterVolumeSpecName: "scripts") pod "9725d464-c206-407c-9b4c-983607fe63d1" (UID: "9725d464-c206-407c-9b4c-983607fe63d1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:26.150095 master-0 kubenswrapper[26474]: I0223 13:29:26.150009 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-083a9-backup-0" Feb 23 13:29:26.192797 master-0 kubenswrapper[26474]: I0223 13:29:26.169241 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9725d464-c206-407c-9b4c-983607fe63d1-kube-api-access-gqdll" (OuterVolumeSpecName: "kube-api-access-gqdll") pod "9725d464-c206-407c-9b4c-983607fe63d1" (UID: "9725d464-c206-407c-9b4c-983607fe63d1"). InnerVolumeSpecName "kube-api-access-gqdll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:26.240270 master-0 kubenswrapper[26474]: I0223 13:29:26.240217 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqdll\" (UniqueName: \"kubernetes.io/projected/9725d464-c206-407c-9b4c-983607fe63d1-kube-api-access-gqdll\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:26.240270 master-0 kubenswrapper[26474]: I0223 13:29:26.240257 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9725d464-c206-407c-9b4c-983607fe63d1-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:26.240270 master-0 kubenswrapper[26474]: I0223 13:29:26.240268 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.442616 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: E0223 13:29:26.443218 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9725d464-c206-407c-9b4c-983607fe63d1" containerName="placement-api" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 
13:29:26.443236 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="9725d464-c206-407c-9b4c-983607fe63d1" containerName="placement-api" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: E0223 13:29:26.443294 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9725d464-c206-407c-9b4c-983607fe63d1" containerName="placement-log" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.443304 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="9725d464-c206-407c-9b4c-983607fe63d1" containerName="placement-log" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: E0223 13:29:26.443328 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97872e6-b11d-4f3c-b9b9-65814a655637" containerName="mariadb-account-create-update" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.443363 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97872e6-b11d-4f3c-b9b9-65814a655637" containerName="mariadb-account-create-update" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.443662 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97872e6-b11d-4f3c-b9b9-65814a655637" containerName="mariadb-account-create-update" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.443691 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="9725d464-c206-407c-9b4c-983607fe63d1" containerName="placement-api" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.443714 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="9725d464-c206-407c-9b4c-983607fe63d1" containerName="placement-log" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.444613 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.454590 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 23 13:29:26.459358 master-0 kubenswrapper[26474]: I0223 13:29:26.454740 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 23 13:29:26.469907 master-0 kubenswrapper[26474]: I0223 13:29:26.465760 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 13:29:26.520858 master-0 kubenswrapper[26474]: I0223 13:29:26.520710 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-config-data" (OuterVolumeSpecName: "config-data") pod "9725d464-c206-407c-9b4c-983607fe63d1" (UID: "9725d464-c206-407c-9b4c-983607fe63d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:26.549756 master-0 kubenswrapper[26474]: I0223 13:29:26.549654 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient" Feb 23 13:29:26.549756 master-0 kubenswrapper[26474]: I0223 13:29:26.549759 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-openstack-config\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient" Feb 23 13:29:26.549994 master-0 kubenswrapper[26474]: I0223 13:29:26.549785 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-k8bn4\" (UniqueName: \"kubernetes.io/projected/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-kube-api-access-k8bn4\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient" Feb 23 13:29:26.549994 master-0 kubenswrapper[26474]: I0223 13:29:26.549820 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-openstack-config-secret\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient" Feb 23 13:29:26.549994 master-0 kubenswrapper[26474]: I0223 13:29:26.549883 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:26.571841 master-0 kubenswrapper[26474]: I0223 13:29:26.568783 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-b49747f5d-8c8s8"] Feb 23 13:29:26.571841 master-0 kubenswrapper[26474]: I0223 13:29:26.570910 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.575404 master-0 kubenswrapper[26474]: I0223 13:29:26.573055 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 23 13:29:26.575404 master-0 kubenswrapper[26474]: I0223 13:29:26.573698 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 23 13:29:26.575404 master-0 kubenswrapper[26474]: I0223 13:29:26.573998 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 23 13:29:26.592706 master-0 kubenswrapper[26474]: I0223 13:29:26.587634 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-b49747f5d-8c8s8"]
Feb 23 13:29:26.638740 master-0 kubenswrapper[26474]: I0223 13:29:26.638643 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9725d464-c206-407c-9b4c-983607fe63d1" (UID: "9725d464-c206-407c-9b4c-983607fe63d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:26.652359 master-0 kubenswrapper[26474]: I0223 13:29:26.652238 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-openstack-config\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient"
Feb 23 13:29:26.652359 master-0 kubenswrapper[26474]: I0223 13:29:26.652309 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8bn4\" (UniqueName: \"kubernetes.io/projected/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-kube-api-access-k8bn4\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient"
Feb 23 13:29:26.652359 master-0 kubenswrapper[26474]: I0223 13:29:26.652389 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-openstack-config-secret\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652435 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-config-data\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652477 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f02e5309-165c-41a7-b1d8-f433e9688643-run-httpd\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652503 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f02e5309-165c-41a7-b1d8-f433e9688643-log-httpd\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652558 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5n4b\" (UniqueName: \"kubernetes.io/projected/f02e5309-165c-41a7-b1d8-f433e9688643-kube-api-access-l5n4b\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652644 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-public-tls-certs\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652742 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-internal-tls-certs\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652783 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652811 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f02e5309-165c-41a7-b1d8-f433e9688643-etc-swift\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.652836 master-0 kubenswrapper[26474]: I0223 13:29:26.652828 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-combined-ca-bundle\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.653270 master-0 kubenswrapper[26474]: I0223 13:29:26.652906 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:26.653820 master-0 kubenswrapper[26474]: I0223 13:29:26.653776 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-openstack-config\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient"
Feb 23 13:29:26.658741 master-0 kubenswrapper[26474]: I0223 13:29:26.658667 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-openstack-config-secret\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient"
Feb 23 13:29:26.661033 master-0 kubenswrapper[26474]: I0223 13:29:26.660954 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient"
Feb 23 13:29:26.679024 master-0 kubenswrapper[26474]: I0223 13:29:26.676723 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8bn4\" (UniqueName: \"kubernetes.io/projected/7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83-kube-api-access-k8bn4\") pod \"openstackclient\" (UID: \"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83\") " pod="openstack/openstackclient"
Feb 23 13:29:26.692937 master-0 kubenswrapper[26474]: I0223 13:29:26.692843 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9725d464-c206-407c-9b4c-983607fe63d1" (UID: "9725d464-c206-407c-9b4c-983607fe63d1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:26.721768 master-0 kubenswrapper[26474]: I0223 13:29:26.721680 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9725d464-c206-407c-9b4c-983607fe63d1" (UID: "9725d464-c206-407c-9b4c-983607fe63d1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.754813 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f02e5309-165c-41a7-b1d8-f433e9688643-run-httpd\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.754873 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f02e5309-165c-41a7-b1d8-f433e9688643-log-httpd\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.754920 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5n4b\" (UniqueName: \"kubernetes.io/projected/f02e5309-165c-41a7-b1d8-f433e9688643-kube-api-access-l5n4b\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.754987 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-public-tls-certs\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.755056 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-internal-tls-certs\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.755110 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f02e5309-165c-41a7-b1d8-f433e9688643-etc-swift\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.755132 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-combined-ca-bundle\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.755221 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-config-data\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.755294 26474 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:26.756656 master-0 kubenswrapper[26474]: I0223 13:29:26.755306 26474 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9725d464-c206-407c-9b4c-983607fe63d1-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:26.757735 master-0 kubenswrapper[26474]: I0223 13:29:26.756755 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f02e5309-165c-41a7-b1d8-f433e9688643-log-httpd\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.758830 master-0 kubenswrapper[26474]: I0223 13:29:26.758390 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f02e5309-165c-41a7-b1d8-f433e9688643-run-httpd\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.761625 master-0 kubenswrapper[26474]: I0223 13:29:26.761585 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-config-data\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.761746 master-0 kubenswrapper[26474]: I0223 13:29:26.761715 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-internal-tls-certs\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.762814 master-0 kubenswrapper[26474]: I0223 13:29:26.762773 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-public-tls-certs\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.766446 master-0 kubenswrapper[26474]: I0223 13:29:26.766308 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f02e5309-165c-41a7-b1d8-f433e9688643-etc-swift\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.766882 master-0 kubenswrapper[26474]: I0223 13:29:26.766859 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f02e5309-165c-41a7-b1d8-f433e9688643-combined-ca-bundle\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.776737 master-0 kubenswrapper[26474]: I0223 13:29:26.776697 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5n4b\" (UniqueName: \"kubernetes.io/projected/f02e5309-165c-41a7-b1d8-f433e9688643-kube-api-access-l5n4b\") pod \"swift-proxy-b49747f5d-8c8s8\" (UID: \"f02e5309-165c-41a7-b1d8-f433e9688643\") " pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:26.787442 master-0 kubenswrapper[26474]: I0223 13:29:26.787404 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 23 13:29:27.043572 master-0 kubenswrapper[26474]: I0223 13:29:27.041545 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:27.072225 master-0 kubenswrapper[26474]: I0223 13:29:27.070881 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerStarted","Data":"ebdf25d3a30d80819d6adf7df5a1b366d2baec073e79fc7298b2a8493c75ae66"}
Feb 23 13:29:27.081986 master-0 kubenswrapper[26474]: I0223 13:29:27.081531 26474 generic.go:334] "Generic (PLEG): container finished" podID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerID="5cca6ca37c029517c5a0f7c9f7c2a63aa4bc2da139605c4157b4c01091594ea6" exitCode=0
Feb 23 13:29:27.082153 master-0 kubenswrapper[26474]: I0223 13:29:27.082081 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf8cbb6b7-ll62g" event={"ID":"d0c6e3b6-c201-44f9-9100-819b15b552f4","Type":"ContainerDied","Data":"5cca6ca37c029517c5a0f7c9f7c2a63aa4bc2da139605c4157b4c01091594ea6"}
Feb 23 13:29:27.096776 master-0 kubenswrapper[26474]: I0223 13:29:27.094835 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66dfd5f7c4-jsbfx" event={"ID":"9725d464-c206-407c-9b4c-983607fe63d1","Type":"ContainerDied","Data":"aa835b54f22612797dceb493f1dfe6b28f622cf1eec9f0680a1e0e576f3cdefa"}
Feb 23 13:29:27.096776 master-0 kubenswrapper[26474]: I0223 13:29:27.094908 26474 scope.go:117] "RemoveContainer" containerID="1527ec91ea674eff5ff80556700585faa0273171d38d35b5a317e4f94cfb868c"
Feb 23 13:29:27.096776 master-0 kubenswrapper[26474]: I0223 13:29:27.094941 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66dfd5f7c4-jsbfx"
Feb 23 13:29:27.100078 master-0 kubenswrapper[26474]: I0223 13:29:27.099948 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" event={"ID":"7af5b404-f4af-4e67-b355-916c6240db47","Type":"ContainerStarted","Data":"88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0"}
Feb 23 13:29:27.100217 master-0 kubenswrapper[26474]: I0223 13:29:27.100183 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h"
Feb 23 13:29:27.200353 master-0 kubenswrapper[26474]: I0223 13:29:27.197349 26474 scope.go:117] "RemoveContainer" containerID="f6d72a61e386612847d6ee612fc2a5240c7e5b0617d2e237633addef58284877"
Feb 23 13:29:27.236919 master-0 kubenswrapper[26474]: I0223 13:29:27.236060 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" podStartSLOduration=4.144748153 podStartE2EDuration="8.236037703s" podCreationTimestamp="2026-02-23 13:29:19 +0000 UTC" firstStartedPulling="2026-02-23 13:29:21.475860344 +0000 UTC m=+883.322368021" lastFinishedPulling="2026-02-23 13:29:25.567149894 +0000 UTC m=+887.413657571" observedRunningTime="2026-02-23 13:29:27.22280829 +0000 UTC m=+889.069315967" watchObservedRunningTime="2026-02-23 13:29:27.236037703 +0000 UTC m=+889.082545380"
Feb 23 13:29:27.271843 master-0 kubenswrapper[26474]: I0223 13:29:27.270413 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-66dfd5f7c4-jsbfx"]
Feb 23 13:29:27.286543 master-0 kubenswrapper[26474]: I0223 13:29:27.275711 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-66dfd5f7c4-jsbfx"]
Feb 23 13:29:27.323188 master-0 kubenswrapper[26474]: I0223 13:29:27.323121 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 23 13:29:27.578912 master-0 kubenswrapper[26474]: I0223 13:29:27.578587 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-b49747f5d-8c8s8"]
Feb 23 13:29:27.586941 master-0 kubenswrapper[26474]: W0223 13:29:27.586891 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf02e5309_165c_41a7_b1d8_f433e9688643.slice/crio-0414c3f4b05aff251553910ef360ddcae4cfd8efd02851ffb8d4c1b56703045d WatchSource:0}: Error finding container 0414c3f4b05aff251553910ef360ddcae4cfd8efd02851ffb8d4c1b56703045d: Status 404 returned error can't find the container with id 0414c3f4b05aff251553910ef360ddcae4cfd8efd02851ffb8d4c1b56703045d
Feb 23 13:29:28.125050 master-0 kubenswrapper[26474]: I0223 13:29:28.124972 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83","Type":"ContainerStarted","Data":"5087b203d2232b1dfc6ced89b9389f8d547362b5d632cda4dc9b0439c9a5053f"}
Feb 23 13:29:28.130316 master-0 kubenswrapper[26474]: I0223 13:29:28.130263 26474 generic.go:334] "Generic (PLEG): container finished" podID="4671c55a-f3de-4d2d-8717-452ee80d2691" containerID="d3910f54fa9ba4b8c1e5ef93e17a2902f3eaf5dba7e551de28ac745529e54497" exitCode=0
Feb 23 13:29:28.130436 master-0 kubenswrapper[26474]: I0223 13:29:28.130406 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c7db585c-7gj9c" event={"ID":"4671c55a-f3de-4d2d-8717-452ee80d2691","Type":"ContainerDied","Data":"d3910f54fa9ba4b8c1e5ef93e17a2902f3eaf5dba7e551de28ac745529e54497"}
Feb 23 13:29:28.137241 master-0 kubenswrapper[26474]: I0223 13:29:28.137159 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b49747f5d-8c8s8" event={"ID":"f02e5309-165c-41a7-b1d8-f433e9688643","Type":"ContainerStarted","Data":"64e7c7dcc681b4d01a6b0b6b19a439501e1e183791eb81360027c1115be109ec"}
Feb 23 13:29:28.137241 master-0 kubenswrapper[26474]: I0223 13:29:28.137211 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b49747f5d-8c8s8" event={"ID":"f02e5309-165c-41a7-b1d8-f433e9688643","Type":"ContainerStarted","Data":"0414c3f4b05aff251553910ef360ddcae4cfd8efd02851ffb8d4c1b56703045d"}
Feb 23 13:29:28.143427 master-0 kubenswrapper[26474]: I0223 13:29:28.142158 26474 generic.go:334] "Generic (PLEG): container finished" podID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerID="9ff64a31dc9d482a3931e000c427559aa1e737269b0307e61e25d7188c8c4abe" exitCode=1
Feb 23 13:29:28.143427 master-0 kubenswrapper[26474]: I0223 13:29:28.142408 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf8cbb6b7-ll62g" event={"ID":"d0c6e3b6-c201-44f9-9100-819b15b552f4","Type":"ContainerDied","Data":"9ff64a31dc9d482a3931e000c427559aa1e737269b0307e61e25d7188c8c4abe"}
Feb 23 13:29:28.143427 master-0 kubenswrapper[26474]: I0223 13:29:28.142496 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf8cbb6b7-ll62g" event={"ID":"d0c6e3b6-c201-44f9-9100-819b15b552f4","Type":"ContainerStarted","Data":"8e562cad7348af436e130b5b0149a28c72818c18ff0de7dee2a789d4e472d8d7"}
Feb 23 13:29:28.147057 master-0 kubenswrapper[26474]: I0223 13:29:28.146822 26474 scope.go:117] "RemoveContainer" containerID="9ff64a31dc9d482a3931e000c427559aa1e737269b0307e61e25d7188c8c4abe"
Feb 23 13:29:28.410367 master-0 kubenswrapper[26474]: I0223 13:29:28.409955 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9725d464-c206-407c-9b4c-983607fe63d1" path="/var/lib/kubelet/pods/9725d464-c206-407c-9b4c-983607fe63d1/volumes"
Feb 23 13:29:29.169484 master-0 kubenswrapper[26474]: I0223 13:29:29.169284 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c7db585c-7gj9c" event={"ID":"4671c55a-f3de-4d2d-8717-452ee80d2691","Type":"ContainerStarted","Data":"0f3f9b052e5ca95da9dbe7107267a82abd5b698ac312123c1de73472b2100a69"}
Feb 23 13:29:29.169484 master-0 kubenswrapper[26474]: I0223 13:29:29.169392 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c7db585c-7gj9c" event={"ID":"4671c55a-f3de-4d2d-8717-452ee80d2691","Type":"ContainerStarted","Data":"fb084015be805141fddd951941d934827612bc621f42c8f84bea264b30630dae"}
Feb 23 13:29:29.170237 master-0 kubenswrapper[26474]: I0223 13:29:29.170206 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-c7db585c-7gj9c"
Feb 23 13:29:29.174756 master-0 kubenswrapper[26474]: I0223 13:29:29.174708 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b49747f5d-8c8s8" event={"ID":"f02e5309-165c-41a7-b1d8-f433e9688643","Type":"ContainerStarted","Data":"2e24a14291acc60e89446d9ae6355f2453e9a5a2dd5265c8d2312642e5b890ca"}
Feb 23 13:29:29.175412 master-0 kubenswrapper[26474]: I0223 13:29:29.175371 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:29.175496 master-0 kubenswrapper[26474]: I0223 13:29:29.175423 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:29.179566 master-0 kubenswrapper[26474]: I0223 13:29:29.179368 26474 generic.go:334] "Generic (PLEG): container finished" podID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerID="64c59a82c5cc79b329479a536d16b8e0f42eb6e4b2ae416ee3f6863a3c23d047" exitCode=1
Feb 23 13:29:29.179566 master-0 kubenswrapper[26474]: I0223 13:29:29.179423 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf8cbb6b7-ll62g" event={"ID":"d0c6e3b6-c201-44f9-9100-819b15b552f4","Type":"ContainerDied","Data":"64c59a82c5cc79b329479a536d16b8e0f42eb6e4b2ae416ee3f6863a3c23d047"}
Feb 23 13:29:29.179566 master-0 kubenswrapper[26474]: I0223 13:29:29.179459 26474 scope.go:117] "RemoveContainer" containerID="9ff64a31dc9d482a3931e000c427559aa1e737269b0307e61e25d7188c8c4abe"
Feb 23 13:29:29.180835 master-0 kubenswrapper[26474]: I0223 13:29:29.180692 26474 scope.go:117] "RemoveContainer" containerID="64c59a82c5cc79b329479a536d16b8e0f42eb6e4b2ae416ee3f6863a3c23d047"
Feb 23 13:29:29.181196 master-0 kubenswrapper[26474]: E0223 13:29:29.181155 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6cf8cbb6b7-ll62g_openstack(d0c6e3b6-c201-44f9-9100-819b15b552f4)\"" pod="openstack/ironic-6cf8cbb6b7-ll62g" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4"
Feb 23 13:29:29.231256 master-0 kubenswrapper[26474]: I0223 13:29:29.230468 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-c7db585c-7gj9c" podStartSLOduration=5.637611093 podStartE2EDuration="6.230449118s" podCreationTimestamp="2026-02-23 13:29:23 +0000 UTC" firstStartedPulling="2026-02-23 13:29:25.602742751 +0000 UTC m=+887.449250438" lastFinishedPulling="2026-02-23 13:29:26.195580776 +0000 UTC m=+888.042088463" observedRunningTime="2026-02-23 13:29:29.202912597 +0000 UTC m=+891.049420284" watchObservedRunningTime="2026-02-23 13:29:29.230449118 +0000 UTC m=+891.076956795"
Feb 23 13:29:29.272323 master-0 kubenswrapper[26474]: I0223 13:29:29.269765 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-b49747f5d-8c8s8" podStartSLOduration=3.269740926 podStartE2EDuration="3.269740926s" podCreationTimestamp="2026-02-23 13:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:29.247440602 +0000 UTC m=+891.093948309" watchObservedRunningTime="2026-02-23 13:29:29.269740926 +0000 UTC m=+891.116248603"
Feb 23 13:29:29.948580 master-0 kubenswrapper[26474]: I0223 13:29:29.948512 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr"
Feb 23 13:29:30.002153 master-0 kubenswrapper[26474]: E0223 13:29:30.001824 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0 is running failed: container process not found" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0" cmd=["/bin/true"]
Feb 23 13:29:30.002153 master-0 kubenswrapper[26474]: E0223 13:29:30.001946 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0 is running failed: container process not found" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0" cmd=["/bin/true"]
Feb 23 13:29:30.002413 master-0 kubenswrapper[26474]: E0223 13:29:30.002162 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0 is running failed: container process not found" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0" cmd=["/bin/true"]
Feb 23 13:29:30.003357 master-0 kubenswrapper[26474]: E0223 13:29:30.002959 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0 is running failed: container process not found" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0" cmd=["/bin/true"]
Feb 23 13:29:30.003357 master-0 kubenswrapper[26474]: E0223 13:29:30.003067 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0 is running failed: container process not found" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0" cmd=["/bin/true"]
Feb 23 13:29:30.003357 master-0 kubenswrapper[26474]: E0223 13:29:30.003105 26474 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" podUID="7af5b404-f4af-4e67-b355-916c6240db47" containerName="ironic-neutron-agent"
Feb 23 13:29:30.003357 master-0 kubenswrapper[26474]: E0223 13:29:30.003319 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0 is running failed: container process not found" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0" cmd=["/bin/true"]
Feb 23 13:29:30.003521 master-0 kubenswrapper[26474]: E0223 13:29:30.003364 26474 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" podUID="7af5b404-f4af-4e67-b355-916c6240db47" containerName="ironic-neutron-agent"
Feb 23 13:29:30.082494 master-0 kubenswrapper[26474]: I0223 13:29:30.079810 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f5cdb5b55-g9brn"]
Feb 23 13:29:30.082494 master-0 kubenswrapper[26474]: I0223 13:29:30.080189 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" podUID="b203cc4c-b77a-4c7e-9303-d980d46b630b" containerName="dnsmasq-dns" containerID="cri-o://3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0" gracePeriod=10
Feb 23 13:29:30.200138 master-0 kubenswrapper[26474]: I0223 13:29:30.200068 26474 scope.go:117] "RemoveContainer" containerID="64c59a82c5cc79b329479a536d16b8e0f42eb6e4b2ae416ee3f6863a3c23d047"
Feb 23 13:29:30.200596 master-0 kubenswrapper[26474]: E0223 13:29:30.200471 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6cf8cbb6b7-ll62g_openstack(d0c6e3b6-c201-44f9-9100-819b15b552f4)\"" pod="openstack/ironic-6cf8cbb6b7-ll62g" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4"
Feb 23 13:29:30.200681 master-0 kubenswrapper[26474]: I0223 13:29:30.200626 26474 generic.go:334] "Generic (PLEG): container finished" podID="7af5b404-f4af-4e67-b355-916c6240db47" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0" exitCode=1
Feb 23 13:29:30.200732 master-0 kubenswrapper[26474]: I0223 13:29:30.200707 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" event={"ID":"7af5b404-f4af-4e67-b355-916c6240db47","Type":"ContainerDied","Data":"88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0"}
Feb 23 13:29:30.201704 master-0 kubenswrapper[26474]: I0223 13:29:30.201676 26474 scope.go:117] "RemoveContainer" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0"
Feb 23 13:29:30.214914 master-0 kubenswrapper[26474]: I0223 13:29:30.214824 26474 generic.go:334] "Generic (PLEG): container finished" podID="808fa98d-dace-4799-9059-a26510355d62" containerID="ebdf25d3a30d80819d6adf7df5a1b366d2baec073e79fc7298b2a8493c75ae66" exitCode=0
Feb 23 13:29:30.216459 master-0 kubenswrapper[26474]: I0223 13:29:30.216413 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerDied","Data":"ebdf25d3a30d80819d6adf7df5a1b366d2baec073e79fc7298b2a8493c75ae66"}
Feb 23 13:29:30.374416 master-0 kubenswrapper[26474]: I0223 13:29:30.374351 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:30.374698 master-0 kubenswrapper[26474]: I0223 13:29:30.374435 26474 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:30.767283 master-0 kubenswrapper[26474]: I0223 13:29:30.767222 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn"
Feb 23 13:29:30.845096 master-0 kubenswrapper[26474]: I0223 13:29:30.844312 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-svc\") pod \"b203cc4c-b77a-4c7e-9303-d980d46b630b\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") "
Feb 23 13:29:30.845096 master-0 kubenswrapper[26474]: I0223 13:29:30.844442 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-config\") pod \"b203cc4c-b77a-4c7e-9303-d980d46b630b\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") "
Feb 23 13:29:30.845096 master-0 kubenswrapper[26474]: I0223 13:29:30.844477 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-swift-storage-0\") pod \"b203cc4c-b77a-4c7e-9303-d980d46b630b\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") "
Feb 23 13:29:30.845096 master-0 kubenswrapper[26474]: I0223 13:29:30.844598 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z29w5\" (UniqueName: \"kubernetes.io/projected/b203cc4c-b77a-4c7e-9303-d980d46b630b-kube-api-access-z29w5\") pod \"b203cc4c-b77a-4c7e-9303-d980d46b630b\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") "
Feb 23 13:29:30.845096 master-0 kubenswrapper[26474]: I0223 13:29:30.844793 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-nb\") pod \"b203cc4c-b77a-4c7e-9303-d980d46b630b\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") "
Feb 23 13:29:30.845096 master-0 kubenswrapper[26474]: I0223 13:29:30.844901 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-sb\") pod \"b203cc4c-b77a-4c7e-9303-d980d46b630b\" (UID: \"b203cc4c-b77a-4c7e-9303-d980d46b630b\") "
Feb 23 13:29:30.865629 master-0 kubenswrapper[26474]: I0223 13:29:30.865564 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b203cc4c-b77a-4c7e-9303-d980d46b630b-kube-api-access-z29w5" (OuterVolumeSpecName: "kube-api-access-z29w5") pod "b203cc4c-b77a-4c7e-9303-d980d46b630b" (UID: "b203cc4c-b77a-4c7e-9303-d980d46b630b"). InnerVolumeSpecName "kube-api-access-z29w5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:30.926257 master-0 kubenswrapper[26474]: I0223 13:29:30.925731 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b203cc4c-b77a-4c7e-9303-d980d46b630b" (UID: "b203cc4c-b77a-4c7e-9303-d980d46b630b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:29:30.931714 master-0 kubenswrapper[26474]: I0223 13:29:30.929779 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b203cc4c-b77a-4c7e-9303-d980d46b630b" (UID: "b203cc4c-b77a-4c7e-9303-d980d46b630b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:29:30.944199 master-0 kubenswrapper[26474]: I0223 13:29:30.944124 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-config" (OuterVolumeSpecName: "config") pod "b203cc4c-b77a-4c7e-9303-d980d46b630b" (UID: "b203cc4c-b77a-4c7e-9303-d980d46b630b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:29:30.954674 master-0 kubenswrapper[26474]: I0223 13:29:30.954627 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:30.954674 master-0 kubenswrapper[26474]: I0223 13:29:30.954670 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:30.954674 master-0 kubenswrapper[26474]: I0223 13:29:30.954684 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z29w5\" (UniqueName: \"kubernetes.io/projected/b203cc4c-b77a-4c7e-9303-d980d46b630b-kube-api-access-z29w5\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:30.954919 master-0 kubenswrapper[26474]: I0223 13:29:30.954693 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName:
\"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:30.961590 master-0 kubenswrapper[26474]: I0223 13:29:30.961519 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b203cc4c-b77a-4c7e-9303-d980d46b630b" (UID: "b203cc4c-b77a-4c7e-9303-d980d46b630b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:30.968354 master-0 kubenswrapper[26474]: I0223 13:29:30.968265 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b203cc4c-b77a-4c7e-9303-d980d46b630b" (UID: "b203cc4c-b77a-4c7e-9303-d980d46b630b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:31.057813 master-0 kubenswrapper[26474]: I0223 13:29:31.057532 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:31.057813 master-0 kubenswrapper[26474]: I0223 13:29:31.057589 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b203cc4c-b77a-4c7e-9303-d980d46b630b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:31.242531 master-0 kubenswrapper[26474]: I0223 13:29:31.242360 26474 generic.go:334] "Generic (PLEG): container finished" podID="b203cc4c-b77a-4c7e-9303-d980d46b630b" containerID="3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0" exitCode=0 Feb 23 13:29:31.243685 master-0 kubenswrapper[26474]: I0223 13:29:31.242536 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" event={"ID":"b203cc4c-b77a-4c7e-9303-d980d46b630b","Type":"ContainerDied","Data":"3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0"} Feb 23 13:29:31.243685 master-0 kubenswrapper[26474]: I0223 13:29:31.242611 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" event={"ID":"b203cc4c-b77a-4c7e-9303-d980d46b630b","Type":"ContainerDied","Data":"4c9c9a1b7b87c134b2a4fb7388ecd6cfd6c1d32c02ce05f5eb7f0939799dbb40"} Feb 23 13:29:31.243685 master-0 kubenswrapper[26474]: I0223 13:29:31.242664 26474 scope.go:117] "RemoveContainer" containerID="3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0" Feb 23 13:29:31.243685 master-0 kubenswrapper[26474]: I0223 13:29:31.242856 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f5cdb5b55-g9brn" Feb 23 13:29:31.256026 master-0 kubenswrapper[26474]: I0223 13:29:31.255947 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" event={"ID":"7af5b404-f4af-4e67-b355-916c6240db47","Type":"ContainerStarted","Data":"751e7e146ef764b9d2f8df41cc825e31db1822a7b4926a1ff015ccaf86c67ce7"} Feb 23 13:29:31.256279 master-0 kubenswrapper[26474]: I0223 13:29:31.256185 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:31.259064 master-0 kubenswrapper[26474]: I0223 13:29:31.259036 26474 scope.go:117] "RemoveContainer" containerID="64c59a82c5cc79b329479a536d16b8e0f42eb6e4b2ae416ee3f6863a3c23d047" Feb 23 13:29:31.259481 master-0 kubenswrapper[26474]: E0223 13:29:31.259457 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6cf8cbb6b7-ll62g_openstack(d0c6e3b6-c201-44f9-9100-819b15b552f4)\"" 
pod="openstack/ironic-6cf8cbb6b7-ll62g" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" Feb 23 13:29:31.324838 master-0 kubenswrapper[26474]: I0223 13:29:31.324776 26474 scope.go:117] "RemoveContainer" containerID="e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f" Feb 23 13:29:31.329543 master-0 kubenswrapper[26474]: I0223 13:29:31.329479 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f5cdb5b55-g9brn"] Feb 23 13:29:31.349983 master-0 kubenswrapper[26474]: I0223 13:29:31.349917 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f5cdb5b55-g9brn"] Feb 23 13:29:31.364650 master-0 kubenswrapper[26474]: I0223 13:29:31.364011 26474 scope.go:117] "RemoveContainer" containerID="3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0" Feb 23 13:29:31.364650 master-0 kubenswrapper[26474]: E0223 13:29:31.364544 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0\": container with ID starting with 3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0 not found: ID does not exist" containerID="3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0" Feb 23 13:29:31.364650 master-0 kubenswrapper[26474]: I0223 13:29:31.364577 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0"} err="failed to get container status \"3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0\": rpc error: code = NotFound desc = could not find container \"3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0\": container with ID starting with 3ea19fb32fec9f86ae41742f9b882fad65106ac6415b20b45e41a5846754c5a0 not found: ID does not exist" Feb 23 13:29:31.364650 master-0 kubenswrapper[26474]: I0223 13:29:31.364599 26474 
scope.go:117] "RemoveContainer" containerID="e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f" Feb 23 13:29:31.365030 master-0 kubenswrapper[26474]: E0223 13:29:31.364946 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f\": container with ID starting with e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f not found: ID does not exist" containerID="e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f" Feb 23 13:29:31.365030 master-0 kubenswrapper[26474]: I0223 13:29:31.364970 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f"} err="failed to get container status \"e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f\": rpc error: code = NotFound desc = could not find container \"e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f\": container with ID starting with e25b264c88e3b19b8d7f68438fcb39772efb5ce54ca630f1f9d1f169361bc57f not found: ID does not exist" Feb 23 13:29:31.750373 master-0 kubenswrapper[26474]: I0223 13:29:31.750267 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:32.080289 master-0 kubenswrapper[26474]: I0223 13:29:32.080217 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-b49747f5d-8c8s8" Feb 23 13:29:32.418206 master-0 kubenswrapper[26474]: I0223 13:29:32.418135 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b203cc4c-b77a-4c7e-9303-d980d46b630b" path="/var/lib/kubelet/pods/b203cc4c-b77a-4c7e-9303-d980d46b630b/volumes" Feb 23 13:29:34.103843 master-0 kubenswrapper[26474]: I0223 13:29:34.103734 26474 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ironic-inspector-db-sync-mrlgq"] Feb 23 13:29:34.105003 master-0 kubenswrapper[26474]: E0223 13:29:34.104965 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b203cc4c-b77a-4c7e-9303-d980d46b630b" containerName="init" Feb 23 13:29:34.105075 master-0 kubenswrapper[26474]: I0223 13:29:34.104999 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="b203cc4c-b77a-4c7e-9303-d980d46b630b" containerName="init" Feb 23 13:29:34.105075 master-0 kubenswrapper[26474]: E0223 13:29:34.105053 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b203cc4c-b77a-4c7e-9303-d980d46b630b" containerName="dnsmasq-dns" Feb 23 13:29:34.105075 master-0 kubenswrapper[26474]: I0223 13:29:34.105063 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="b203cc4c-b77a-4c7e-9303-d980d46b630b" containerName="dnsmasq-dns" Feb 23 13:29:34.105712 master-0 kubenswrapper[26474]: I0223 13:29:34.105681 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="b203cc4c-b77a-4c7e-9303-d980d46b630b" containerName="dnsmasq-dns" Feb 23 13:29:34.107521 master-0 kubenswrapper[26474]: I0223 13:29:34.107478 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.117780 master-0 kubenswrapper[26474]: I0223 13:29:34.109706 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 23 13:29:34.117780 master-0 kubenswrapper[26474]: I0223 13:29:34.110099 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 23 13:29:34.117780 master-0 kubenswrapper[26474]: I0223 13:29:34.117333 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-mrlgq"] Feb 23 13:29:34.297101 master-0 kubenswrapper[26474]: I0223 13:29:34.297012 26474 generic.go:334] "Generic (PLEG): container finished" podID="7af5b404-f4af-4e67-b355-916c6240db47" containerID="751e7e146ef764b9d2f8df41cc825e31db1822a7b4926a1ff015ccaf86c67ce7" exitCode=1 Feb 23 13:29:34.297419 master-0 kubenswrapper[26474]: I0223 13:29:34.297121 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" event={"ID":"7af5b404-f4af-4e67-b355-916c6240db47","Type":"ContainerDied","Data":"751e7e146ef764b9d2f8df41cc825e31db1822a7b4926a1ff015ccaf86c67ce7"} Feb 23 13:29:34.297419 master-0 kubenswrapper[26474]: I0223 13:29:34.297323 26474 scope.go:117] "RemoveContainer" containerID="88389880926a55f40bc165e487225de8614005017474eade71ce822bb64a46d0" Feb 23 13:29:34.298848 master-0 kubenswrapper[26474]: I0223 13:29:34.298809 26474 scope.go:117] "RemoveContainer" containerID="751e7e146ef764b9d2f8df41cc825e31db1822a7b4926a1ff015ccaf86c67ce7" Feb 23 13:29:34.299386 master-0 kubenswrapper[26474]: E0223 13:29:34.299327 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7d6b446974-djn5h_openstack(7af5b404-f4af-4e67-b355-916c6240db47)\"" 
pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" podUID="7af5b404-f4af-4e67-b355-916c6240db47" Feb 23 13:29:34.299448 master-0 kubenswrapper[26474]: I0223 13:29:34.299361 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.299486 master-0 kubenswrapper[26474]: I0223 13:29:34.299463 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.299525 master-0 kubenswrapper[26474]: I0223 13:29:34.299513 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-combined-ca-bundle\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.299631 master-0 kubenswrapper[26474]: I0223 13:29:34.299602 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-scripts\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.299676 master-0 kubenswrapper[26474]: I0223 13:29:34.299636 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-config\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.299676 master-0 kubenswrapper[26474]: I0223 13:29:34.299653 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4rn\" (UniqueName: \"kubernetes.io/projected/e9fcecfd-b548-47da-800d-24deb9834fa7-kube-api-access-xz4rn\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.299740 master-0 kubenswrapper[26474]: I0223 13:29:34.299684 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e9fcecfd-b548-47da-800d-24deb9834fa7-etc-podinfo\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.403401 master-0 kubenswrapper[26474]: I0223 13:29:34.403195 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-scripts\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.403401 master-0 kubenswrapper[26474]: I0223 13:29:34.403262 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-config\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.403401 master-0 kubenswrapper[26474]: I0223 13:29:34.403288 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xz4rn\" (UniqueName: \"kubernetes.io/projected/e9fcecfd-b548-47da-800d-24deb9834fa7-kube-api-access-xz4rn\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.403401 master-0 kubenswrapper[26474]: I0223 13:29:34.403330 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e9fcecfd-b548-47da-800d-24deb9834fa7-etc-podinfo\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.403709 master-0 kubenswrapper[26474]: I0223 13:29:34.403517 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.403709 master-0 kubenswrapper[26474]: I0223 13:29:34.403552 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.403709 master-0 kubenswrapper[26474]: I0223 13:29:34.403595 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-combined-ca-bundle\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 
13:29:34.407329 master-0 kubenswrapper[26474]: I0223 13:29:34.407256 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-combined-ca-bundle\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.412988 master-0 kubenswrapper[26474]: I0223 13:29:34.412771 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e9fcecfd-b548-47da-800d-24deb9834fa7-etc-podinfo\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.413274 master-0 kubenswrapper[26474]: I0223 13:29:34.413237 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.417493 master-0 kubenswrapper[26474]: I0223 13:29:34.416260 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.417493 master-0 kubenswrapper[26474]: I0223 13:29:34.416778 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-scripts\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " 
pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.431550 master-0 kubenswrapper[26474]: I0223 13:29:34.431008 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4rn\" (UniqueName: \"kubernetes.io/projected/e9fcecfd-b548-47da-800d-24deb9834fa7-kube-api-access-xz4rn\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.435418 master-0 kubenswrapper[26474]: I0223 13:29:34.433391 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-config\") pod \"ironic-inspector-db-sync-mrlgq\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:34.438846 master-0 kubenswrapper[26474]: I0223 13:29:34.438784 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:35.002042 master-0 kubenswrapper[26474]: I0223 13:29:35.001975 26474 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:35.125284 master-0 kubenswrapper[26474]: I0223 13:29:35.125223 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"] Feb 23 13:29:35.125927 master-0 kubenswrapper[26474]: I0223 13:29:35.125536 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-4fec4-default-external-api-0" podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerName="glance-log" containerID="cri-o://5d462ace45b7339e633e21bbec2030637d88bd78f71b07fe7e58244a384422c7" gracePeriod=30 Feb 23 13:29:35.125927 master-0 kubenswrapper[26474]: I0223 13:29:35.125683 26474 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-4fec4-default-external-api-0" podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerName="glance-httpd" containerID="cri-o://0f29bec70791b04a1f83e8f9002bf09630c2b9c91ee0fb472b6424c7b608ae1b" gracePeriod=30 Feb 23 13:29:35.317445 master-0 kubenswrapper[26474]: I0223 13:29:35.317387 26474 generic.go:334] "Generic (PLEG): container finished" podID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerID="5d462ace45b7339e633e21bbec2030637d88bd78f71b07fe7e58244a384422c7" exitCode=143 Feb 23 13:29:35.317703 master-0 kubenswrapper[26474]: I0223 13:29:35.317587 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"64431386-9bdd-4d6d-b469-cf1733e6ae01","Type":"ContainerDied","Data":"5d462ace45b7339e633e21bbec2030637d88bd78f71b07fe7e58244a384422c7"} Feb 23 13:29:35.318375 master-0 kubenswrapper[26474]: I0223 13:29:35.318325 26474 scope.go:117] "RemoveContainer" containerID="751e7e146ef764b9d2f8df41cc825e31db1822a7b4926a1ff015ccaf86c67ce7" Feb 23 13:29:35.318728 master-0 kubenswrapper[26474]: E0223 13:29:35.318693 26474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7d6b446974-djn5h_openstack(7af5b404-f4af-4e67-b355-916c6240db47)\"" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" podUID="7af5b404-f4af-4e67-b355-916c6240db47" Feb 23 13:29:35.836296 master-0 kubenswrapper[26474]: I0223 13:29:35.836141 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-c7db585c-7gj9c" Feb 23 13:29:36.011117 master-0 kubenswrapper[26474]: I0223 13:29:36.011041 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"] Feb 23 13:29:36.012464 master-0 kubenswrapper[26474]: I0223 13:29:36.011579 26474 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-4fec4-default-internal-api-0" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerName="glance-log" containerID="cri-o://587b8669d3eca70a07d5c825bbfd0552cbca861825e7d9f54124c8f4251e8d19" gracePeriod=30 Feb 23 13:29:36.013104 master-0 kubenswrapper[26474]: I0223 13:29:36.013007 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-4fec4-default-internal-api-0" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerName="glance-httpd" containerID="cri-o://45fb6aa6006145e1f9daf21c319aa115ba9b32e777e96c9dbfcb7c0fc0c9514b" gracePeriod=30 Feb 23 13:29:36.048521 master-0 kubenswrapper[26474]: I0223 13:29:36.048429 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6cf8cbb6b7-ll62g"] Feb 23 13:29:36.048779 master-0 kubenswrapper[26474]: I0223 13:29:36.048743 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-6cf8cbb6b7-ll62g" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api-log" containerID="cri-o://8e562cad7348af436e130b5b0149a28c72818c18ff0de7dee2a789d4e472d8d7" gracePeriod=60 Feb 23 13:29:36.084129 master-0 kubenswrapper[26474]: I0223 13:29:36.082499 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-whpzb"] Feb 23 13:29:36.089644 master-0 kubenswrapper[26474]: I0223 13:29:36.085451 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-whpzb"
Feb 23 13:29:36.113432 master-0 kubenswrapper[26474]: I0223 13:29:36.106595 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-whpzb"]
Feb 23 13:29:36.155650 master-0 kubenswrapper[26474]: I0223 13:29:36.155252 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qw4db"]
Feb 23 13:29:36.158481 master-0 kubenswrapper[26474]: I0223 13:29:36.156892 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qw4db"
Feb 23 13:29:36.162237 master-0 kubenswrapper[26474]: I0223 13:29:36.162069 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qw4db"]
Feb 23 13:29:36.227364 master-0 kubenswrapper[26474]: I0223 13:29:36.210748 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6k8d\" (UniqueName: \"kubernetes.io/projected/882d9e69-bb35-4914-a2ef-42fb05306b3a-kube-api-access-m6k8d\") pod \"nova-api-db-create-whpzb\" (UID: \"882d9e69-bb35-4914-a2ef-42fb05306b3a\") " pod="openstack/nova-api-db-create-whpzb"
Feb 23 13:29:36.227364 master-0 kubenswrapper[26474]: I0223 13:29:36.210907 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882d9e69-bb35-4914-a2ef-42fb05306b3a-operator-scripts\") pod \"nova-api-db-create-whpzb\" (UID: \"882d9e69-bb35-4914-a2ef-42fb05306b3a\") " pod="openstack/nova-api-db-create-whpzb"
Feb 23 13:29:36.251940 master-0 kubenswrapper[26474]: I0223 13:29:36.251229 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-46be-account-create-update-v6qrd"]
Feb 23 13:29:36.253979 master-0 kubenswrapper[26474]: I0223 13:29:36.253953 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-46be-account-create-update-v6qrd"
Feb 23 13:29:36.256878 master-0 kubenswrapper[26474]: I0223 13:29:36.256837 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 23 13:29:36.261298 master-0 kubenswrapper[26474]: I0223 13:29:36.261251 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-46be-account-create-update-v6qrd"]
Feb 23 13:29:36.321372 master-0 kubenswrapper[26474]: I0223 13:29:36.318010 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-operator-scripts\") pod \"nova-cell0-db-create-qw4db\" (UID: \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\") " pod="openstack/nova-cell0-db-create-qw4db"
Feb 23 13:29:36.321372 master-0 kubenswrapper[26474]: I0223 13:29:36.318091 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbp4v\" (UniqueName: \"kubernetes.io/projected/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-kube-api-access-tbp4v\") pod \"nova-cell0-db-create-qw4db\" (UID: \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\") " pod="openstack/nova-cell0-db-create-qw4db"
Feb 23 13:29:36.321372 master-0 kubenswrapper[26474]: I0223 13:29:36.318121 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6k8d\" (UniqueName: \"kubernetes.io/projected/882d9e69-bb35-4914-a2ef-42fb05306b3a-kube-api-access-m6k8d\") pod \"nova-api-db-create-whpzb\" (UID: \"882d9e69-bb35-4914-a2ef-42fb05306b3a\") " pod="openstack/nova-api-db-create-whpzb"
Feb 23 13:29:36.321372 master-0 kubenswrapper[26474]: I0223 13:29:36.318199 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882d9e69-bb35-4914-a2ef-42fb05306b3a-operator-scripts\") pod \"nova-api-db-create-whpzb\" (UID: \"882d9e69-bb35-4914-a2ef-42fb05306b3a\") " pod="openstack/nova-api-db-create-whpzb"
Feb 23 13:29:36.338488 master-0 kubenswrapper[26474]: I0223 13:29:36.326285 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882d9e69-bb35-4914-a2ef-42fb05306b3a-operator-scripts\") pod \"nova-api-db-create-whpzb\" (UID: \"882d9e69-bb35-4914-a2ef-42fb05306b3a\") " pod="openstack/nova-api-db-create-whpzb"
Feb 23 13:29:36.338488 master-0 kubenswrapper[26474]: I0223 13:29:36.337445 26474 generic.go:334] "Generic (PLEG): container finished" podID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerID="8e562cad7348af436e130b5b0149a28c72818c18ff0de7dee2a789d4e472d8d7" exitCode=143
Feb 23 13:29:36.338488 master-0 kubenswrapper[26474]: I0223 13:29:36.337520 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf8cbb6b7-ll62g" event={"ID":"d0c6e3b6-c201-44f9-9100-819b15b552f4","Type":"ContainerDied","Data":"8e562cad7348af436e130b5b0149a28c72818c18ff0de7dee2a789d4e472d8d7"}
Feb 23 13:29:36.343484 master-0 kubenswrapper[26474]: I0223 13:29:36.342605 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6k8d\" (UniqueName: \"kubernetes.io/projected/882d9e69-bb35-4914-a2ef-42fb05306b3a-kube-api-access-m6k8d\") pod \"nova-api-db-create-whpzb\" (UID: \"882d9e69-bb35-4914-a2ef-42fb05306b3a\") " pod="openstack/nova-api-db-create-whpzb"
Feb 23 13:29:36.343484 master-0 kubenswrapper[26474]: I0223 13:29:36.343123 26474 generic.go:334] "Generic (PLEG): container finished" podID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerID="587b8669d3eca70a07d5c825bbfd0552cbca861825e7d9f54124c8f4251e8d19" exitCode=143
Feb 23 13:29:36.343484 master-0 kubenswrapper[26474]: I0223 13:29:36.343161 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d5fa92de-9c73-449e-9f0d-abbd176f1eb5","Type":"ContainerDied","Data":"587b8669d3eca70a07d5c825bbfd0552cbca861825e7d9f54124c8f4251e8d19"}
Feb 23 13:29:36.350721 master-0 kubenswrapper[26474]: I0223 13:29:36.350667 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bqkcr"]
Feb 23 13:29:36.390934 master-0 kubenswrapper[26474]: I0223 13:29:36.390855 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bqkcr"
Feb 23 13:29:36.424399 master-0 kubenswrapper[26474]: I0223 13:29:36.424310 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-operator-scripts\") pod \"nova-cell0-db-create-qw4db\" (UID: \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\") " pod="openstack/nova-cell0-db-create-qw4db"
Feb 23 13:29:36.424662 master-0 kubenswrapper[26474]: I0223 13:29:36.424492 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbp4v\" (UniqueName: \"kubernetes.io/projected/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-kube-api-access-tbp4v\") pod \"nova-cell0-db-create-qw4db\" (UID: \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\") " pod="openstack/nova-cell0-db-create-qw4db"
Feb 23 13:29:36.424725 master-0 kubenswrapper[26474]: I0223 13:29:36.424682 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-operator-scripts\") pod \"nova-api-46be-account-create-update-v6qrd\" (UID: \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\") " pod="openstack/nova-api-46be-account-create-update-v6qrd"
Feb 23 13:29:36.424974 master-0 kubenswrapper[26474]: I0223 13:29:36.424902 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcm8m\" (UniqueName: \"kubernetes.io/projected/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-kube-api-access-jcm8m\") pod \"nova-api-46be-account-create-update-v6qrd\" (UID: \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\") " pod="openstack/nova-api-46be-account-create-update-v6qrd"
Feb 23 13:29:36.465240 master-0 kubenswrapper[26474]: I0223 13:29:36.459471 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbp4v\" (UniqueName: \"kubernetes.io/projected/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-kube-api-access-tbp4v\") pod \"nova-cell0-db-create-qw4db\" (UID: \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\") " pod="openstack/nova-cell0-db-create-qw4db"
Feb 23 13:29:36.465240 master-0 kubenswrapper[26474]: I0223 13:29:36.462162 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-operator-scripts\") pod \"nova-cell0-db-create-qw4db\" (UID: \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\") " pod="openstack/nova-cell0-db-create-qw4db"
Feb 23 13:29:36.465240 master-0 kubenswrapper[26474]: I0223 13:29:36.463364 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bqkcr"]
Feb 23 13:29:36.483946 master-0 kubenswrapper[26474]: I0223 13:29:36.483885 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7274-account-create-update-2r52m"]
Feb 23 13:29:36.488577 master-0 kubenswrapper[26474]: I0223 13:29:36.487742 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7274-account-create-update-2r52m"
Feb 23 13:29:36.489929 master-0 kubenswrapper[26474]: I0223 13:29:36.489873 26474 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-db-create-whpzb"
Feb 23 13:29:36.490976 master-0 kubenswrapper[26474]: I0223 13:29:36.490934 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 23 13:29:36.499417 master-0 kubenswrapper[26474]: I0223 13:29:36.499313 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7274-account-create-update-2r52m"]
Feb 23 13:29:36.504931 master-0 kubenswrapper[26474]: I0223 13:29:36.504859 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qw4db"
Feb 23 13:29:36.527456 master-0 kubenswrapper[26474]: I0223 13:29:36.527378 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-operator-scripts\") pod \"nova-api-46be-account-create-update-v6qrd\" (UID: \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\") " pod="openstack/nova-api-46be-account-create-update-v6qrd"
Feb 23 13:29:36.527671 master-0 kubenswrapper[26474]: I0223 13:29:36.527564 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcm8m\" (UniqueName: \"kubernetes.io/projected/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-kube-api-access-jcm8m\") pod \"nova-api-46be-account-create-update-v6qrd\" (UID: \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\") " pod="openstack/nova-api-46be-account-create-update-v6qrd"
Feb 23 13:29:36.527671 master-0 kubenswrapper[26474]: I0223 13:29:36.527612 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9w6\" (UniqueName: \"kubernetes.io/projected/199a7a01-a57b-473d-a293-4b88a1a946c8-kube-api-access-kv9w6\") pod \"nova-cell1-db-create-bqkcr\" (UID: \"199a7a01-a57b-473d-a293-4b88a1a946c8\") " pod="openstack/nova-cell1-db-create-bqkcr"
Feb 23 13:29:36.527737 master-0 kubenswrapper[26474]: I0223 13:29:36.527694 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199a7a01-a57b-473d-a293-4b88a1a946c8-operator-scripts\") pod \"nova-cell1-db-create-bqkcr\" (UID: \"199a7a01-a57b-473d-a293-4b88a1a946c8\") " pod="openstack/nova-cell1-db-create-bqkcr"
Feb 23 13:29:36.530170 master-0 kubenswrapper[26474]: I0223 13:29:36.530113 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-operator-scripts\") pod \"nova-api-46be-account-create-update-v6qrd\" (UID: \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\") " pod="openstack/nova-api-46be-account-create-update-v6qrd"
Feb 23 13:29:36.547916 master-0 kubenswrapper[26474]: I0223 13:29:36.547868 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcm8m\" (UniqueName: \"kubernetes.io/projected/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-kube-api-access-jcm8m\") pod \"nova-api-46be-account-create-update-v6qrd\" (UID: \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\") " pod="openstack/nova-api-46be-account-create-update-v6qrd"
Feb 23 13:29:36.579783 master-0 kubenswrapper[26474]: I0223 13:29:36.575977 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-46be-account-create-update-v6qrd"
Feb 23 13:29:36.629443 master-0 kubenswrapper[26474]: I0223 13:29:36.629303 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c631041f-70ef-485e-9113-f33f737f91f7-operator-scripts\") pod \"nova-cell0-7274-account-create-update-2r52m\" (UID: \"c631041f-70ef-485e-9113-f33f737f91f7\") " pod="openstack/nova-cell0-7274-account-create-update-2r52m"
Feb 23 13:29:36.629633 master-0 kubenswrapper[26474]: I0223 13:29:36.629474 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnh98\" (UniqueName: \"kubernetes.io/projected/c631041f-70ef-485e-9113-f33f737f91f7-kube-api-access-jnh98\") pod \"nova-cell0-7274-account-create-update-2r52m\" (UID: \"c631041f-70ef-485e-9113-f33f737f91f7\") " pod="openstack/nova-cell0-7274-account-create-update-2r52m"
Feb 23 13:29:36.629633 master-0 kubenswrapper[26474]: I0223 13:29:36.629563 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv9w6\" (UniqueName: \"kubernetes.io/projected/199a7a01-a57b-473d-a293-4b88a1a946c8-kube-api-access-kv9w6\") pod \"nova-cell1-db-create-bqkcr\" (UID: \"199a7a01-a57b-473d-a293-4b88a1a946c8\") " pod="openstack/nova-cell1-db-create-bqkcr"
Feb 23 13:29:36.629633 master-0 kubenswrapper[26474]: I0223 13:29:36.629604 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199a7a01-a57b-473d-a293-4b88a1a946c8-operator-scripts\") pod \"nova-cell1-db-create-bqkcr\" (UID: \"199a7a01-a57b-473d-a293-4b88a1a946c8\") " pod="openstack/nova-cell1-db-create-bqkcr"
Feb 23 13:29:36.630609 master-0 kubenswrapper[26474]: I0223 13:29:36.630496 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199a7a01-a57b-473d-a293-4b88a1a946c8-operator-scripts\") pod \"nova-cell1-db-create-bqkcr\" (UID: \"199a7a01-a57b-473d-a293-4b88a1a946c8\") " pod="openstack/nova-cell1-db-create-bqkcr"
Feb 23 13:29:36.636521 master-0 kubenswrapper[26474]: I0223 13:29:36.634156 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-222a-account-create-update-6g2mv"]
Feb 23 13:29:36.640144 master-0 kubenswrapper[26474]: I0223 13:29:36.640067 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-222a-account-create-update-6g2mv"
Feb 23 13:29:36.643470 master-0 kubenswrapper[26474]: I0223 13:29:36.643433 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 23 13:29:36.661465 master-0 kubenswrapper[26474]: I0223 13:29:36.660840 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-222a-account-create-update-6g2mv"]
Feb 23 13:29:36.667858 master-0 kubenswrapper[26474]: I0223 13:29:36.667799 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv9w6\" (UniqueName: \"kubernetes.io/projected/199a7a01-a57b-473d-a293-4b88a1a946c8-kube-api-access-kv9w6\") pod \"nova-cell1-db-create-bqkcr\" (UID: \"199a7a01-a57b-473d-a293-4b88a1a946c8\") " pod="openstack/nova-cell1-db-create-bqkcr"
Feb 23 13:29:36.731855 master-0 kubenswrapper[26474]: I0223 13:29:36.731791 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c631041f-70ef-485e-9113-f33f737f91f7-operator-scripts\") pod \"nova-cell0-7274-account-create-update-2r52m\" (UID: \"c631041f-70ef-485e-9113-f33f737f91f7\") " pod="openstack/nova-cell0-7274-account-create-update-2r52m"
Feb 23 13:29:36.732104 master-0 kubenswrapper[26474]: I0223 13:29:36.731962 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnh98\" (UniqueName: \"kubernetes.io/projected/c631041f-70ef-485e-9113-f33f737f91f7-kube-api-access-jnh98\") pod \"nova-cell0-7274-account-create-update-2r52m\" (UID: \"c631041f-70ef-485e-9113-f33f737f91f7\") " pod="openstack/nova-cell0-7274-account-create-update-2r52m"
Feb 23 13:29:36.732104 master-0 kubenswrapper[26474]: I0223 13:29:36.732095 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvhd6\" (UniqueName: \"kubernetes.io/projected/3071f1bd-cdeb-433e-bd4b-a02190489f95-kube-api-access-jvhd6\") pod \"nova-cell1-222a-account-create-update-6g2mv\" (UID: \"3071f1bd-cdeb-433e-bd4b-a02190489f95\") " pod="openstack/nova-cell1-222a-account-create-update-6g2mv"
Feb 23 13:29:36.732207 master-0 kubenswrapper[26474]: I0223 13:29:36.732170 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3071f1bd-cdeb-433e-bd4b-a02190489f95-operator-scripts\") pod \"nova-cell1-222a-account-create-update-6g2mv\" (UID: \"3071f1bd-cdeb-433e-bd4b-a02190489f95\") " pod="openstack/nova-cell1-222a-account-create-update-6g2mv"
Feb 23 13:29:36.733964 master-0 kubenswrapper[26474]: I0223 13:29:36.733758 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c631041f-70ef-485e-9113-f33f737f91f7-operator-scripts\") pod \"nova-cell0-7274-account-create-update-2r52m\" (UID: \"c631041f-70ef-485e-9113-f33f737f91f7\") " pod="openstack/nova-cell0-7274-account-create-update-2r52m"
Feb 23 13:29:36.753691 master-0 kubenswrapper[26474]: I0223 13:29:36.753624 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnh98\" (UniqueName: \"kubernetes.io/projected/c631041f-70ef-485e-9113-f33f737f91f7-kube-api-access-jnh98\") pod \"nova-cell0-7274-account-create-update-2r52m\" (UID:
\"c631041f-70ef-485e-9113-f33f737f91f7\") " pod="openstack/nova-cell0-7274-account-create-update-2r52m"
Feb 23 13:29:36.758651 master-0 kubenswrapper[26474]: I0223 13:29:36.758569 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bqkcr"
Feb 23 13:29:36.834969 master-0 kubenswrapper[26474]: I0223 13:29:36.834873 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvhd6\" (UniqueName: \"kubernetes.io/projected/3071f1bd-cdeb-433e-bd4b-a02190489f95-kube-api-access-jvhd6\") pod \"nova-cell1-222a-account-create-update-6g2mv\" (UID: \"3071f1bd-cdeb-433e-bd4b-a02190489f95\") " pod="openstack/nova-cell1-222a-account-create-update-6g2mv"
Feb 23 13:29:36.835211 master-0 kubenswrapper[26474]: I0223 13:29:36.835100 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3071f1bd-cdeb-433e-bd4b-a02190489f95-operator-scripts\") pod \"nova-cell1-222a-account-create-update-6g2mv\" (UID: \"3071f1bd-cdeb-433e-bd4b-a02190489f95\") " pod="openstack/nova-cell1-222a-account-create-update-6g2mv"
Feb 23 13:29:36.836230 master-0 kubenswrapper[26474]: I0223 13:29:36.836185 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3071f1bd-cdeb-433e-bd4b-a02190489f95-operator-scripts\") pod \"nova-cell1-222a-account-create-update-6g2mv\" (UID: \"3071f1bd-cdeb-433e-bd4b-a02190489f95\") " pod="openstack/nova-cell1-222a-account-create-update-6g2mv"
Feb 23 13:29:36.836550 master-0 kubenswrapper[26474]: I0223 13:29:36.836523 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7274-account-create-update-2r52m"
Feb 23 13:29:36.852562 master-0 kubenswrapper[26474]: I0223 13:29:36.852475 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvhd6\" (UniqueName: \"kubernetes.io/projected/3071f1bd-cdeb-433e-bd4b-a02190489f95-kube-api-access-jvhd6\") pod \"nova-cell1-222a-account-create-update-6g2mv\" (UID: \"3071f1bd-cdeb-433e-bd4b-a02190489f95\") " pod="openstack/nova-cell1-222a-account-create-update-6g2mv"
Feb 23 13:29:37.026931 master-0 kubenswrapper[26474]: I0223 13:29:37.026865 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-222a-account-create-update-6g2mv"
Feb 23 13:29:37.050323 master-0 kubenswrapper[26474]: I0223 13:29:37.050262 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-b49747f5d-8c8s8"
Feb 23 13:29:37.721332 master-0 kubenswrapper[26474]: I0223 13:29:37.721262 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-655989fbf7-gkzkz"
Feb 23 13:29:37.820970 master-0 kubenswrapper[26474]: I0223 13:29:37.820890 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fb588f498-9ctlt"]
Feb 23 13:29:37.821927 master-0 kubenswrapper[26474]: I0223 13:29:37.821899 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fb588f498-9ctlt" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerName="neutron-httpd" containerID="cri-o://9b36dfa74135b1ee14813aa452de0d0c2e90a381d065c2a28e08d603f400241c" gracePeriod=30
Feb 23 13:29:37.822688 master-0 kubenswrapper[26474]: I0223 13:29:37.821826 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fb588f498-9ctlt" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerName="neutron-api" containerID="cri-o://b041adfb5d532197cf22656a64d17c598f1422d06dc8e730c00ec6f7bc0c34b8" gracePeriod=30
Feb 23 13:29:38.409436 master-0 kubenswrapper[26474]: I0223 13:29:38.408797 26474 generic.go:334] "Generic (PLEG): container finished" podID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerID="9b36dfa74135b1ee14813aa452de0d0c2e90a381d065c2a28e08d603f400241c" exitCode=0
Feb 23 13:29:38.414385 master-0 kubenswrapper[26474]: I0223 13:29:38.414331 26474 generic.go:334] "Generic (PLEG): container finished" podID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerID="0f29bec70791b04a1f83e8f9002bf09630c2b9c91ee0fb472b6424c7b608ae1b" exitCode=0
Feb 23 13:29:38.433943 master-0 kubenswrapper[26474]: I0223 13:29:38.433845 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb588f498-9ctlt" event={"ID":"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb","Type":"ContainerDied","Data":"9b36dfa74135b1ee14813aa452de0d0c2e90a381d065c2a28e08d603f400241c"}
Feb 23 13:29:38.433943 master-0 kubenswrapper[26474]: I0223 13:29:38.433945 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"64431386-9bdd-4d6d-b469-cf1733e6ae01","Type":"ContainerDied","Data":"0f29bec70791b04a1f83e8f9002bf09630c2b9c91ee0fb472b6424c7b608ae1b"}
Feb 23 13:29:39.434135 master-0 kubenswrapper[26474]: I0223 13:29:39.433912 26474 generic.go:334] "Generic (PLEG): container finished" podID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerID="b041adfb5d532197cf22656a64d17c598f1422d06dc8e730c00ec6f7bc0c34b8" exitCode=0
Feb 23 13:29:39.434135 master-0 kubenswrapper[26474]: I0223 13:29:39.434075 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb588f498-9ctlt" event={"ID":"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb","Type":"ContainerDied","Data":"b041adfb5d532197cf22656a64d17c598f1422d06dc8e730c00ec6f7bc0c34b8"}
Feb 23 13:29:40.457541 master-0 kubenswrapper[26474]: I0223 13:29:40.457387 26474 generic.go:334] "Generic (PLEG): container finished" podID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerID="45fb6aa6006145e1f9daf21c319aa115ba9b32e777e96c9dbfcb7c0fc0c9514b" exitCode=0
Feb 23 13:29:40.457541 master-0 kubenswrapper[26474]: I0223 13:29:40.457468 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d5fa92de-9c73-449e-9f0d-abbd176f1eb5","Type":"ContainerDied","Data":"45fb6aa6006145e1f9daf21c319aa115ba9b32e777e96c9dbfcb7c0fc0c9514b"}
Feb 23 13:29:41.502873 master-0 kubenswrapper[26474]: I0223 13:29:41.502703 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6cf8cbb6b7-ll62g"
Feb 23 13:29:41.513120 master-0 kubenswrapper[26474]: I0223 13:29:41.513070 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf8cbb6b7-ll62g" event={"ID":"d0c6e3b6-c201-44f9-9100-819b15b552f4","Type":"ContainerDied","Data":"c7ecd6fc9cdb0f5c62f1170597ba70d4c7be68eeb98f7776f19c8736c3d8953d"}
Feb 23 13:29:41.513461 master-0 kubenswrapper[26474]: I0223 13:29:41.513444 26474 scope.go:117] "RemoveContainer" containerID="64c59a82c5cc79b329479a536d16b8e0f42eb6e4b2ae416ee3f6863a3c23d047"
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622236 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-logs\") pod \"d0c6e3b6-c201-44f9-9100-819b15b552f4\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") "
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622346 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-custom\") pod \"d0c6e3b6-c201-44f9-9100-819b15b552f4\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") "
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622467 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-scripts\") pod \"d0c6e3b6-c201-44f9-9100-819b15b552f4\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") "
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622583 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-combined-ca-bundle\") pod \"d0c6e3b6-c201-44f9-9100-819b15b552f4\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") "
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622620 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data\") pod \"d0c6e3b6-c201-44f9-9100-819b15b552f4\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") "
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622646 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d0c6e3b6-c201-44f9-9100-819b15b552f4-etc-podinfo\") pod \"d0c6e3b6-c201-44f9-9100-819b15b552f4\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") "
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622669 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvjzl\" (UniqueName: \"kubernetes.io/projected/d0c6e3b6-c201-44f9-9100-819b15b552f4-kube-api-access-wvjzl\") pod \"d0c6e3b6-c201-44f9-9100-819b15b552f4\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") "
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622721 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName:
\"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-merged\") pod \"d0c6e3b6-c201-44f9-9100-819b15b552f4\" (UID: \"d0c6e3b6-c201-44f9-9100-819b15b552f4\") "
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.622718 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-logs" (OuterVolumeSpecName: "logs") pod "d0c6e3b6-c201-44f9-9100-819b15b552f4" (UID: "d0c6e3b6-c201-44f9-9100-819b15b552f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.623443 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-logs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.624000 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "d0c6e3b6-c201-44f9-9100-819b15b552f4" (UID: "d0c6e3b6-c201-44f9-9100-819b15b552f4"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:29:41.627190 master-0 kubenswrapper[26474]: I0223 13:29:41.626424 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d0c6e3b6-c201-44f9-9100-819b15b552f4-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "d0c6e3b6-c201-44f9-9100-819b15b552f4" (UID: "d0c6e3b6-c201-44f9-9100-819b15b552f4"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 23 13:29:41.630169 master-0 kubenswrapper[26474]: I0223 13:29:41.629995 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-scripts" (OuterVolumeSpecName: "scripts") pod "d0c6e3b6-c201-44f9-9100-819b15b552f4" (UID: "d0c6e3b6-c201-44f9-9100-819b15b552f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:41.643087 master-0 kubenswrapper[26474]: I0223 13:29:41.636428 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0c6e3b6-c201-44f9-9100-819b15b552f4-kube-api-access-wvjzl" (OuterVolumeSpecName: "kube-api-access-wvjzl") pod "d0c6e3b6-c201-44f9-9100-819b15b552f4" (UID: "d0c6e3b6-c201-44f9-9100-819b15b552f4"). InnerVolumeSpecName "kube-api-access-wvjzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:41.643087 master-0 kubenswrapper[26474]: I0223 13:29:41.636663 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d0c6e3b6-c201-44f9-9100-819b15b552f4" (UID: "d0c6e3b6-c201-44f9-9100-819b15b552f4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:41.648057 master-0 kubenswrapper[26474]: I0223 13:29:41.648009 26474 scope.go:117] "RemoveContainer" containerID="8e562cad7348af436e130b5b0149a28c72818c18ff0de7dee2a789d4e472d8d7"
Feb 23 13:29:41.669598 master-0 kubenswrapper[26474]: I0223 13:29:41.669547 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data" (OuterVolumeSpecName: "config-data") pod "d0c6e3b6-c201-44f9-9100-819b15b552f4" (UID: "d0c6e3b6-c201-44f9-9100-819b15b552f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:41.715460 master-0 kubenswrapper[26474]: I0223 13:29:41.715269 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0c6e3b6-c201-44f9-9100-819b15b552f4" (UID: "d0c6e3b6-c201-44f9-9100-819b15b552f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:41.741390 master-0 kubenswrapper[26474]: I0223 13:29:41.732105 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:41.741390 master-0 kubenswrapper[26474]: I0223 13:29:41.732155 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:41.741390 master-0 kubenswrapper[26474]: I0223 13:29:41.732166 26474 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d0c6e3b6-c201-44f9-9100-819b15b552f4-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:41.741390 master-0 kubenswrapper[26474]: I0223 13:29:41.732176 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvjzl\" (UniqueName: \"kubernetes.io/projected/d0c6e3b6-c201-44f9-9100-819b15b552f4-kube-api-access-wvjzl\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:41.741390 master-0 kubenswrapper[26474]: I0223 13:29:41.732189 26474 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-merged\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:41.741390 master-0 kubenswrapper[26474]: I0223 13:29:41.732197 26474 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:41.741390 master-0 kubenswrapper[26474]: I0223 13:29:41.732205 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0c6e3b6-c201-44f9-9100-819b15b552f4-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:41.867505 master-0 kubenswrapper[26474]: I0223 13:29:41.857723 26474 scope.go:117] "RemoveContainer" containerID="5cca6ca37c029517c5a0f7c9f7c2a63aa4bc2da139605c4157b4c01091594ea6"
Feb 23 13:29:42.050045 master-0 kubenswrapper[26474]: I0223 13:29:42.049924 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:42.112708 master-0 kubenswrapper[26474]: I0223 13:29:42.109941 26474 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-5fb588f498-9ctlt"
Feb 23 13:29:42.143098 master-0 kubenswrapper[26474]: I0223 13:29:42.143046 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") "
Feb 23 13:29:42.143364 master-0 kubenswrapper[26474]: I0223 13:29:42.143125 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-logs\") pod \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") "
Feb 23 13:29:42.143364 master-0 kubenswrapper[26474]: I0223 13:29:42.143155 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-internal-tls-certs\") pod \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") "
Feb 23 13:29:42.143364 master-0 kubenswrapper[26474]: I0223 13:29:42.143270 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-httpd-run\") pod \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") "
Feb 23 13:29:42.143364 master-0 kubenswrapper[26474]: I0223 13:29:42.143286 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-combined-ca-bundle\") pod \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") "
Feb 23 13:29:42.143364 master-0 kubenswrapper[26474]: I0223 13:29:42.143315 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-scripts\") pod \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") "
Feb 23 13:29:42.143588 master-0 kubenswrapper[26474]: I0223 13:29:42.143408 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dzqh\" (UniqueName: \"kubernetes.io/projected/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-kube-api-access-6dzqh\") pod \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") "
Feb 23 13:29:42.143639 master-0 kubenswrapper[26474]: I0223 13:29:42.143612 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-config-data\") pod \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\" (UID: \"d5fa92de-9c73-449e-9f0d-abbd176f1eb5\") "
Feb 23 13:29:42.147434 master-0 kubenswrapper[26474]: I0223 13:29:42.147323 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-logs" (OuterVolumeSpecName: "logs") pod "d5fa92de-9c73-449e-9f0d-abbd176f1eb5" (UID: "d5fa92de-9c73-449e-9f0d-abbd176f1eb5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:29:42.147434 master-0 kubenswrapper[26474]: I0223 13:29:42.147378 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d5fa92de-9c73-449e-9f0d-abbd176f1eb5" (UID: "d5fa92de-9c73-449e-9f0d-abbd176f1eb5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:29:42.152818 master-0 kubenswrapper[26474]: I0223 13:29:42.152741 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-kube-api-access-6dzqh" (OuterVolumeSpecName: "kube-api-access-6dzqh") pod "d5fa92de-9c73-449e-9f0d-abbd176f1eb5" (UID: "d5fa92de-9c73-449e-9f0d-abbd176f1eb5"). InnerVolumeSpecName "kube-api-access-6dzqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:42.153237 master-0 kubenswrapper[26474]: I0223 13:29:42.153182 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-scripts" (OuterVolumeSpecName: "scripts") pod "d5fa92de-9c73-449e-9f0d-abbd176f1eb5" (UID: "d5fa92de-9c73-449e-9f0d-abbd176f1eb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:42.167480 master-0 kubenswrapper[26474]: I0223 13:29:42.165436 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc" (OuterVolumeSpecName: "glance") pod "d5fa92de-9c73-449e-9f0d-abbd176f1eb5" (UID: "d5fa92de-9c73-449e-9f0d-abbd176f1eb5"). InnerVolumeSpecName "pvc-944b9e95-3688-4337-a744-6330aac8e963". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 23 13:29:42.178406 master-0 kubenswrapper[26474]: I0223 13:29:42.177852 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5fa92de-9c73-449e-9f0d-abbd176f1eb5" (UID: "d5fa92de-9c73-449e-9f0d-abbd176f1eb5"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.206896 master-0 kubenswrapper[26474]: I0223 13:29:42.206833 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-config-data" (OuterVolumeSpecName: "config-data") pod "d5fa92de-9c73-449e-9f0d-abbd176f1eb5" (UID: "d5fa92de-9c73-449e-9f0d-abbd176f1eb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.209719 master-0 kubenswrapper[26474]: I0223 13:29:42.209674 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d5fa92de-9c73-449e-9f0d-abbd176f1eb5" (UID: "d5fa92de-9c73-449e-9f0d-abbd176f1eb5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.247405 master-0 kubenswrapper[26474]: I0223 13:29:42.247288 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cw96\" (UniqueName: \"kubernetes.io/projected/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-kube-api-access-2cw96\") pod \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " Feb 23 13:29:42.247640 master-0 kubenswrapper[26474]: I0223 13:29:42.247486 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-ovndb-tls-certs\") pod \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " Feb 23 13:29:42.247732 master-0 kubenswrapper[26474]: I0223 13:29:42.247703 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-combined-ca-bundle\") pod 
\"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " Feb 23 13:29:42.247950 master-0 kubenswrapper[26474]: I0223 13:29:42.247914 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-config\") pod \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " Feb 23 13:29:42.248119 master-0 kubenswrapper[26474]: I0223 13:29:42.248083 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-httpd-config\") pod \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\" (UID: \"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb\") " Feb 23 13:29:42.249289 master-0 kubenswrapper[26474]: I0223 13:29:42.249244 26474 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") on node \"master-0\" " Feb 23 13:29:42.249289 master-0 kubenswrapper[26474]: I0223 13:29:42.249278 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.249289 master-0 kubenswrapper[26474]: I0223 13:29:42.249291 26474 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.249523 master-0 kubenswrapper[26474]: I0223 13:29:42.249307 26474 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.249523 master-0 
kubenswrapper[26474]: I0223 13:29:42.249366 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.249523 master-0 kubenswrapper[26474]: I0223 13:29:42.249377 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.249523 master-0 kubenswrapper[26474]: I0223 13:29:42.249389 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dzqh\" (UniqueName: \"kubernetes.io/projected/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-kube-api-access-6dzqh\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.249523 master-0 kubenswrapper[26474]: I0223 13:29:42.249398 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5fa92de-9c73-449e-9f0d-abbd176f1eb5-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.252072 master-0 kubenswrapper[26474]: I0223 13:29:42.252011 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-kube-api-access-2cw96" (OuterVolumeSpecName: "kube-api-access-2cw96") pod "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" (UID: "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb"). InnerVolumeSpecName "kube-api-access-2cw96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:42.260120 master-0 kubenswrapper[26474]: I0223 13:29:42.260053 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" (UID: "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.287033 master-0 kubenswrapper[26474]: I0223 13:29:42.286986 26474 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 23 13:29:42.287215 master-0 kubenswrapper[26474]: I0223 13:29:42.287196 26474 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-944b9e95-3688-4337-a744-6330aac8e963" (UniqueName: "kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc") on node "master-0" Feb 23 13:29:42.330915 master-0 kubenswrapper[26474]: I0223 13:29:42.330730 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-config" (OuterVolumeSpecName: "config") pod "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" (UID: "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.331610 master-0 kubenswrapper[26474]: I0223 13:29:42.331565 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" (UID: "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.351563 master-0 kubenswrapper[26474]: I0223 13:29:42.351470 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.351563 master-0 kubenswrapper[26474]: I0223 13:29:42.351547 26474 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-httpd-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.351563 master-0 kubenswrapper[26474]: I0223 13:29:42.351561 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cw96\" (UniqueName: \"kubernetes.io/projected/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-kube-api-access-2cw96\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.351563 master-0 kubenswrapper[26474]: I0223 13:29:42.351573 26474 reconciler_common.go:293] "Volume detached for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.351563 master-0 kubenswrapper[26474]: I0223 13:29:42.351583 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.382947 master-0 kubenswrapper[26474]: I0223 13:29:42.359516 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" (UID: "6e7cb4e9-6c67-409d-8446-cce05d6b8fbb"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.454326 master-0 kubenswrapper[26474]: I0223 13:29:42.454242 26474 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.574182 master-0 kubenswrapper[26474]: I0223 13:29:42.574041 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb588f498-9ctlt" event={"ID":"6e7cb4e9-6c67-409d-8446-cce05d6b8fbb","Type":"ContainerDied","Data":"6f213a92b33522e82ac39656954e3c0560f616ceaa241006b0b6828300c55eca"} Feb 23 13:29:42.574182 master-0 kubenswrapper[26474]: I0223 13:29:42.574141 26474 scope.go:117] "RemoveContainer" containerID="9b36dfa74135b1ee14813aa452de0d0c2e90a381d065c2a28e08d603f400241c" Feb 23 13:29:42.575595 master-0 kubenswrapper[26474]: I0223 13:29:42.574333 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fb588f498-9ctlt" Feb 23 13:29:42.582577 master-0 kubenswrapper[26474]: I0223 13:29:42.581575 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:42.585984 master-0 kubenswrapper[26474]: I0223 13:29:42.585753 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83","Type":"ContainerStarted","Data":"6e1c5d237082b37127db7ca2e56e8d6cb62f25b6e07c18a99b65861e186d10af"} Feb 23 13:29:42.590154 master-0 kubenswrapper[26474]: I0223 13:29:42.588516 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6cf8cbb6b7-ll62g" Feb 23 13:29:42.595836 master-0 kubenswrapper[26474]: I0223 13:29:42.592857 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:42.595836 master-0 kubenswrapper[26474]: I0223 13:29:42.593941 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d5fa92de-9c73-449e-9f0d-abbd176f1eb5","Type":"ContainerDied","Data":"db52a498a865bf3d18b7d46b64dc177f138cd85ae1cbf8e8ba45260d091a1ad7"} Feb 23 13:29:42.656666 master-0 kubenswrapper[26474]: I0223 13:29:42.655477 26474 scope.go:117] "RemoveContainer" containerID="b041adfb5d532197cf22656a64d17c598f1422d06dc8e730c00ec6f7bc0c34b8" Feb 23 13:29:42.672535 master-0 kubenswrapper[26474]: I0223 13:29:42.671181 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-public-tls-certs\") pod \"64431386-9bdd-4d6d-b469-cf1733e6ae01\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " Feb 23 13:29:42.672535 master-0 kubenswrapper[26474]: I0223 13:29:42.671316 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-scripts\") pod \"64431386-9bdd-4d6d-b469-cf1733e6ae01\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " Feb 23 13:29:42.672535 master-0 kubenswrapper[26474]: I0223 13:29:42.671448 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-httpd-run\") pod \"64431386-9bdd-4d6d-b469-cf1733e6ae01\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " Feb 23 13:29:42.672535 master-0 kubenswrapper[26474]: I0223 13:29:42.671500 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-combined-ca-bundle\") pod 
\"64431386-9bdd-4d6d-b469-cf1733e6ae01\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " Feb 23 13:29:42.672535 master-0 kubenswrapper[26474]: I0223 13:29:42.671521 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4stbh\" (UniqueName: \"kubernetes.io/projected/64431386-9bdd-4d6d-b469-cf1733e6ae01-kube-api-access-4stbh\") pod \"64431386-9bdd-4d6d-b469-cf1733e6ae01\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " Feb 23 13:29:42.672535 master-0 kubenswrapper[26474]: I0223 13:29:42.671545 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-logs\") pod \"64431386-9bdd-4d6d-b469-cf1733e6ae01\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " Feb 23 13:29:42.672535 master-0 kubenswrapper[26474]: I0223 13:29:42.671644 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-config-data\") pod \"64431386-9bdd-4d6d-b469-cf1733e6ae01\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " Feb 23 13:29:42.672535 master-0 kubenswrapper[26474]: I0223 13:29:42.671771 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"64431386-9bdd-4d6d-b469-cf1733e6ae01\" (UID: \"64431386-9bdd-4d6d-b469-cf1733e6ae01\") " Feb 23 13:29:42.675135 master-0 kubenswrapper[26474]: I0223 13:29:42.674300 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-logs" (OuterVolumeSpecName: "logs") pod "64431386-9bdd-4d6d-b469-cf1733e6ae01" (UID: "64431386-9bdd-4d6d-b469-cf1733e6ae01"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:42.675135 master-0 kubenswrapper[26474]: I0223 13:29:42.674821 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "64431386-9bdd-4d6d-b469-cf1733e6ae01" (UID: "64431386-9bdd-4d6d-b469-cf1733e6ae01"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:42.679585 master-0 kubenswrapper[26474]: I0223 13:29:42.679529 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-scripts" (OuterVolumeSpecName: "scripts") pod "64431386-9bdd-4d6d-b469-cf1733e6ae01" (UID: "64431386-9bdd-4d6d-b469-cf1733e6ae01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.697158 master-0 kubenswrapper[26474]: I0223 13:29:42.697085 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64431386-9bdd-4d6d-b469-cf1733e6ae01-kube-api-access-4stbh" (OuterVolumeSpecName: "kube-api-access-4stbh") pod "64431386-9bdd-4d6d-b469-cf1733e6ae01" (UID: "64431386-9bdd-4d6d-b469-cf1733e6ae01"). InnerVolumeSpecName "kube-api-access-4stbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:42.714814 master-0 kubenswrapper[26474]: I0223 13:29:42.714757 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929" (OuterVolumeSpecName: "glance") pod "64431386-9bdd-4d6d-b469-cf1733e6ae01" (UID: "64431386-9bdd-4d6d-b469-cf1733e6ae01"). InnerVolumeSpecName "pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 13:29:42.719905 master-0 kubenswrapper[26474]: I0223 13:29:42.719842 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fb588f498-9ctlt"] Feb 23 13:29:42.740066 master-0 kubenswrapper[26474]: I0223 13:29:42.740011 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5fb588f498-9ctlt"] Feb 23 13:29:42.747169 master-0 kubenswrapper[26474]: I0223 13:29:42.745453 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64431386-9bdd-4d6d-b469-cf1733e6ae01" (UID: "64431386-9bdd-4d6d-b469-cf1733e6ae01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.766188 master-0 kubenswrapper[26474]: I0223 13:29:42.765442 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.874422747 podStartE2EDuration="16.765416302s" podCreationTimestamp="2026-02-23 13:29:26 +0000 UTC" firstStartedPulling="2026-02-23 13:29:27.377785098 +0000 UTC m=+889.224292775" lastFinishedPulling="2026-02-23 13:29:41.268778653 +0000 UTC m=+903.115286330" observedRunningTime="2026-02-23 13:29:42.671863351 +0000 UTC m=+904.518371048" watchObservedRunningTime="2026-02-23 13:29:42.765416302 +0000 UTC m=+904.611923989" Feb 23 13:29:42.812819 master-0 kubenswrapper[26474]: I0223 13:29:42.809111 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6cf8cbb6b7-ll62g"] Feb 23 13:29:42.812819 master-0 kubenswrapper[26474]: I0223 13:29:42.810470 26474 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.812819 master-0 kubenswrapper[26474]: I0223 
13:29:42.810507 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.812819 master-0 kubenswrapper[26474]: I0223 13:29:42.810518 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4stbh\" (UniqueName: \"kubernetes.io/projected/64431386-9bdd-4d6d-b469-cf1733e6ae01-kube-api-access-4stbh\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.812819 master-0 kubenswrapper[26474]: I0223 13:29:42.810527 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64431386-9bdd-4d6d-b469-cf1733e6ae01-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.819682 master-0 kubenswrapper[26474]: I0223 13:29:42.817203 26474 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") on node \"master-0\" " Feb 23 13:29:42.819682 master-0 kubenswrapper[26474]: I0223 13:29:42.817243 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.827759 master-0 kubenswrapper[26474]: W0223 13:29:42.825268 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc631041f_70ef_485e_9113_f33f737f91f7.slice/crio-093e488fe4090f334ef82ec89a0b7e12f4a37d43a81d117174f0fd2fb82ab9fd WatchSource:0}: Error finding container 093e488fe4090f334ef82ec89a0b7e12f4a37d43a81d117174f0fd2fb82ab9fd: Status 404 returned error can't find the container with id 093e488fe4090f334ef82ec89a0b7e12f4a37d43a81d117174f0fd2fb82ab9fd Feb 23 13:29:42.827759 master-0 kubenswrapper[26474]: W0223 
13:29:42.825971 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9fcecfd_b548_47da_800d_24deb9834fa7.slice/crio-92ab5d19f36f86bfedada7b6a5ea197a809b164a81f6400cdfb60c898334517f WatchSource:0}: Error finding container 92ab5d19f36f86bfedada7b6a5ea197a809b164a81f6400cdfb60c898334517f: Status 404 returned error can't find the container with id 92ab5d19f36f86bfedada7b6a5ea197a809b164a81f6400cdfb60c898334517f Feb 23 13:29:42.835394 master-0 kubenswrapper[26474]: W0223 13:29:42.834709 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cd82ed9_d70c_4c60_afb4_63db5a57aa79.slice/crio-66e40a285e120748602fe3dcdb563ecd6f5addea2f9adde75245c437a4ee8e12 WatchSource:0}: Error finding container 66e40a285e120748602fe3dcdb563ecd6f5addea2f9adde75245c437a4ee8e12: Status 404 returned error can't find the container with id 66e40a285e120748602fe3dcdb563ecd6f5addea2f9adde75245c437a4ee8e12 Feb 23 13:29:42.840859 master-0 kubenswrapper[26474]: I0223 13:29:42.840602 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "64431386-9bdd-4d6d-b469-cf1733e6ae01" (UID: "64431386-9bdd-4d6d-b469-cf1733e6ae01"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.844168 master-0 kubenswrapper[26474]: I0223 13:29:42.842765 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-config-data" (OuterVolumeSpecName: "config-data") pod "64431386-9bdd-4d6d-b469-cf1733e6ae01" (UID: "64431386-9bdd-4d6d-b469-cf1733e6ae01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:42.870353 master-0 kubenswrapper[26474]: I0223 13:29:42.856885 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 23 13:29:42.870353 master-0 kubenswrapper[26474]: I0223 13:29:42.857097 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 23 13:29:42.870353 master-0 kubenswrapper[26474]: I0223 13:29:42.865694 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-6cf8cbb6b7-ll62g"] Feb 23 13:29:42.870353 master-0 kubenswrapper[26474]: I0223 13:29:42.866287 26474 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 23 13:29:42.882359 master-0 kubenswrapper[26474]: I0223 13:29:42.874106 26474 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98" (UniqueName: "kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929") on node "master-0" Feb 23 13:29:42.882359 master-0 kubenswrapper[26474]: I0223 13:29:42.878518 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-222a-account-create-update-6g2mv"] Feb 23 13:29:42.896354 master-0 kubenswrapper[26474]: I0223 13:29:42.890459 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-whpzb"] Feb 23 13:29:42.896354 master-0 kubenswrapper[26474]: I0223 13:29:42.890914 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 23 13:29:42.925356 master-0 kubenswrapper[26474]: I0223 13:29:42.919896 26474 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:42.925356 master-0 kubenswrapper[26474]: I0223 13:29:42.919977 26474 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64431386-9bdd-4d6d-b469-cf1733e6ae01-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:42.925356 master-0 kubenswrapper[26474]: I0223 13:29:42.920006 26474 reconciler_common.go:293] "Volume detached for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") on node \"master-0\" DevicePath \"\""
Feb 23 13:29:42.947351 master-0 kubenswrapper[26474]: I0223 13:29:42.946630 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"]
Feb 23 13:29:42.973461 master-0 kubenswrapper[26474]: I0223 13:29:42.966564 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7274-account-create-update-2r52m"]
Feb 23 13:29:43.009539 master-0 kubenswrapper[26474]: I0223 13:29:43.009484 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"]
Feb 23 13:29:43.029788 master-0 kubenswrapper[26474]: I0223 13:29:43.022659 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-mrlgq"]
Feb 23 13:29:43.041543 master-0 kubenswrapper[26474]: I0223 13:29:43.040761 26474 scope.go:117] "RemoveContainer" containerID="45fb6aa6006145e1f9daf21c319aa115ba9b32e777e96c9dbfcb7c0fc0c9514b"
Feb 23 13:29:43.056580 master-0 kubenswrapper[26474]: I0223 13:29:43.055251 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qw4db"]
Feb 23 13:29:43.070424 master-0 kubenswrapper[26474]: I0223 13:29:43.070370 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-46be-account-create-update-v6qrd"]
Feb 23 13:29:43.082679 master-0 kubenswrapper[26474]: I0223 13:29:43.081879 26474 scope.go:117] "RemoveContainer" containerID="587b8669d3eca70a07d5c825bbfd0552cbca861825e7d9f54124c8f4251e8d19"
Feb 23 13:29:43.082679 master-0 kubenswrapper[26474]: I0223 13:29:43.082620 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"]
Feb 23 13:29:43.083452 master-0 kubenswrapper[26474]: E0223 13:29:43.083401 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerName="glance-httpd"
Feb 23 13:29:43.083452 master-0 kubenswrapper[26474]: I0223 13:29:43.083425 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerName="glance-httpd"
Feb 23 13:29:43.083536 master-0 kubenswrapper[26474]: E0223 13:29:43.083499 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api"
Feb 23 13:29:43.083536 master-0 kubenswrapper[26474]: I0223 13:29:43.083511 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api"
Feb 23 13:29:43.083536 master-0 kubenswrapper[26474]: E0223 13:29:43.083527 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerName="glance-log"
Feb 23 13:29:43.083536 master-0 kubenswrapper[26474]: I0223 13:29:43.083534 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerName="glance-log"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: E0223 13:29:43.083548 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerName="neutron-api"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: I0223 13:29:43.083556 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerName="neutron-api"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: E0223 13:29:43.083576 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="init"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: I0223 13:29:43.083584 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="init"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: E0223 13:29:43.083596 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: I0223 13:29:43.083602 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: E0223 13:29:43.083616 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerName="glance-log"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: I0223 13:29:43.083622 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerName="glance-log"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: E0223 13:29:43.083635 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerName="glance-httpd"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: I0223 13:29:43.083641 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerName="glance-httpd"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: E0223 13:29:43.083653 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerName="neutron-httpd"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: I0223 13:29:43.083659 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerName="neutron-httpd"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: E0223 13:29:43.083676 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api-log"
Feb 23 13:29:43.083674 master-0 kubenswrapper[26474]: I0223 13:29:43.083684 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api-log"
Feb 23 13:29:43.084049 master-0 kubenswrapper[26474]: I0223 13:29:43.083957 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api"
Feb 23 13:29:43.084049 master-0 kubenswrapper[26474]: I0223 13:29:43.083989 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerName="glance-log"
Feb 23 13:29:43.084049 master-0 kubenswrapper[26474]: I0223 13:29:43.084000 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api-log"
Feb 23 13:29:43.084049 master-0 kubenswrapper[26474]: I0223 13:29:43.084015 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" containerName="glance-httpd"
Feb 23 13:29:43.084049 master-0 kubenswrapper[26474]: I0223 13:29:43.084035 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerName="glance-log"
Feb 23 13:29:43.084049 master-0 kubenswrapper[26474]: I0223 13:29:43.084051 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" containerName="glance-httpd"
Feb 23 13:29:43.084892 master-0 kubenswrapper[26474]: I0223 13:29:43.084806 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" containerName="ironic-api"
Feb 23 13:29:43.084945 master-0 kubenswrapper[26474]: I0223 13:29:43.084910 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerName="neutron-httpd"
Feb 23 13:29:43.084945 master-0 kubenswrapper[26474]: I0223 13:29:43.084935 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" containerName="neutron-api"
Feb 23 13:29:43.093476 master-0 kubenswrapper[26474]: I0223 13:29:43.093414 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.106362 master-0 kubenswrapper[26474]: I0223 13:29:43.098636 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4fec4-default-internal-config-data"
Feb 23 13:29:43.106362 master-0 kubenswrapper[26474]: I0223 13:29:43.101231 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 23 13:29:43.119686 master-0 kubenswrapper[26474]: I0223 13:29:43.114731 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"]
Feb 23 13:29:43.130398 master-0 kubenswrapper[26474]: I0223 13:29:43.130305 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-combined-ca-bundle\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.130614 master-0 kubenswrapper[26474]: I0223 13:29:43.130461 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-config-data\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.130614 master-0 kubenswrapper[26474]: I0223 13:29:43.130488 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d053834d-7877-4283-b937-693f20d0c6a4-logs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.130614 master-0 kubenswrapper[26474]: I0223 13:29:43.130525 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d053834d-7877-4283-b937-693f20d0c6a4-httpd-run\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.130614 master-0 kubenswrapper[26474]: I0223 13:29:43.130551 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7bfx\" (UniqueName: \"kubernetes.io/projected/d053834d-7877-4283-b937-693f20d0c6a4-kube-api-access-w7bfx\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.130790 master-0 kubenswrapper[26474]: I0223 13:29:43.130619 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-internal-tls-certs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.130790 master-0 kubenswrapper[26474]: I0223 13:29:43.130643 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-scripts\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.130790 master-0 kubenswrapper[26474]: I0223 13:29:43.130681 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.167791 master-0 kubenswrapper[26474]: I0223 13:29:43.166210 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bqkcr"]
Feb 23 13:29:43.233210 master-0 kubenswrapper[26474]: I0223 13:29:43.233150 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-combined-ca-bundle\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.233396 master-0 kubenswrapper[26474]: I0223 13:29:43.233305 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-config-data\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.233484 master-0 kubenswrapper[26474]: I0223 13:29:43.233458 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d053834d-7877-4283-b937-693f20d0c6a4-logs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.233782 master-0 kubenswrapper[26474]: I0223 13:29:43.233725 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d053834d-7877-4283-b937-693f20d0c6a4-httpd-run\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.233867 master-0 kubenswrapper[26474]: I0223 13:29:43.233840 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7bfx\" (UniqueName: \"kubernetes.io/projected/d053834d-7877-4283-b937-693f20d0c6a4-kube-api-access-w7bfx\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.234446 master-0 kubenswrapper[26474]: I0223 13:29:43.234108 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-internal-tls-certs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.234446 master-0 kubenswrapper[26474]: I0223 13:29:43.234142 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-scripts\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.234446 master-0 kubenswrapper[26474]: I0223 13:29:43.234195 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d053834d-7877-4283-b937-693f20d0c6a4-httpd-run\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.234446 master-0 kubenswrapper[26474]: I0223 13:29:43.234217 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.234945 master-0 kubenswrapper[26474]: I0223 13:29:43.234881 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d053834d-7877-4283-b937-693f20d0c6a4-logs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.247487 master-0 kubenswrapper[26474]: I0223 13:29:43.238839 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 13:29:43.247487 master-0 kubenswrapper[26474]: I0223 13:29:43.238933 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0e6f58ed848c8ffad14a494b64a381134d4f7ca50c612d6ade3ceaa3a801c011/globalmount\"" pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.247487 master-0 kubenswrapper[26474]: I0223 13:29:43.239892 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-scripts\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.247487 master-0 kubenswrapper[26474]: I0223 13:29:43.240269 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-config-data\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.247487 master-0 kubenswrapper[26474]: I0223 13:29:43.240561 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-internal-tls-certs\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.249689 master-0 kubenswrapper[26474]: I0223 13:29:43.249272 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d053834d-7877-4283-b937-693f20d0c6a4-combined-ca-bundle\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.253239 master-0 kubenswrapper[26474]: I0223 13:29:43.253196 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7bfx\" (UniqueName: \"kubernetes.io/projected/d053834d-7877-4283-b937-693f20d0c6a4-kube-api-access-w7bfx\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:43.613263 master-0 kubenswrapper[26474]: I0223 13:29:43.613174 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qw4db" event={"ID":"5cd82ed9-d70c-4c60-afb4-63db5a57aa79","Type":"ContainerStarted","Data":"c47d48df2c815f68cfcc77cbabf1749afe3bd7eb1752d17238b43114ab44679a"}
Feb 23 13:29:43.613263 master-0 kubenswrapper[26474]: I0223 13:29:43.613256 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qw4db" event={"ID":"5cd82ed9-d70c-4c60-afb4-63db5a57aa79","Type":"ContainerStarted","Data":"66e40a285e120748602fe3dcdb563ecd6f5addea2f9adde75245c437a4ee8e12"}
Feb 23 13:29:43.616479 master-0 kubenswrapper[26474]: I0223 13:29:43.616408 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7274-account-create-update-2r52m" event={"ID":"c631041f-70ef-485e-9113-f33f737f91f7","Type":"ContainerStarted","Data":"1dc91fb386814feef2ef798e000fbc2db1b1eafc4600ab376d4db128bbc7d479"}
Feb 23 13:29:43.616543 master-0 kubenswrapper[26474]: I0223 13:29:43.616486 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7274-account-create-update-2r52m" event={"ID":"c631041f-70ef-485e-9113-f33f737f91f7","Type":"ContainerStarted","Data":"093e488fe4090f334ef82ec89a0b7e12f4a37d43a81d117174f0fd2fb82ab9fd"}
Feb 23 13:29:43.624724 master-0 kubenswrapper[26474]: I0223 13:29:43.624643 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-mrlgq" event={"ID":"e9fcecfd-b548-47da-800d-24deb9834fa7","Type":"ContainerStarted","Data":"92ab5d19f36f86bfedada7b6a5ea197a809b164a81f6400cdfb60c898334517f"}
Feb 23 13:29:43.626744 master-0 kubenswrapper[26474]: I0223 13:29:43.626690 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-222a-account-create-update-6g2mv" event={"ID":"3071f1bd-cdeb-433e-bd4b-a02190489f95","Type":"ContainerStarted","Data":"568c5f81feff9fcc93f089d2c09c00795d8eb698a8c61eb9d0531dd29ea9e5e7"}
Feb 23 13:29:43.626744 master-0 kubenswrapper[26474]: I0223 13:29:43.626724 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-222a-account-create-update-6g2mv" event={"ID":"3071f1bd-cdeb-433e-bd4b-a02190489f95","Type":"ContainerStarted","Data":"0ab482afad7f6469273bb76a0b8e8f6b807e7da1e470ffcbf111cf2096a5c38e"}
Feb 23 13:29:43.629648 master-0 kubenswrapper[26474]: I0223 13:29:43.629610 26474 generic.go:334] "Generic (PLEG): container finished" podID="882d9e69-bb35-4914-a2ef-42fb05306b3a" containerID="233d3694041ac1e9f1c1efb28014c8ead8c91353640f6408f7963cc671b93a53" exitCode=0
Feb 23 13:29:43.629789 master-0 kubenswrapper[26474]: I0223 13:29:43.629695 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-whpzb" event={"ID":"882d9e69-bb35-4914-a2ef-42fb05306b3a","Type":"ContainerDied","Data":"233d3694041ac1e9f1c1efb28014c8ead8c91353640f6408f7963cc671b93a53"}
Feb 23 13:29:43.629853 master-0 kubenswrapper[26474]: I0223 13:29:43.629792 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-whpzb" event={"ID":"882d9e69-bb35-4914-a2ef-42fb05306b3a","Type":"ContainerStarted","Data":"bd024d24e170f053a332460fd33871d83c732aa43a215fa29f072bfded38f77c"}
Feb 23 13:29:43.632189 master-0 kubenswrapper[26474]: I0223 13:29:43.632142 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bqkcr" event={"ID":"199a7a01-a57b-473d-a293-4b88a1a946c8","Type":"ContainerStarted","Data":"487bf583bb1ab0684e719d552dd5bdadd70305ca2de4f87e0b0a368a4e97999a"}
Feb 23 13:29:43.632189 master-0 kubenswrapper[26474]: I0223 13:29:43.632182 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bqkcr" event={"ID":"199a7a01-a57b-473d-a293-4b88a1a946c8","Type":"ContainerStarted","Data":"e639d11db670984068b1abfad5f74ded9ead6fe8484b749b981f5d04a68c6719"}
Feb 23 13:29:43.655857 master-0 kubenswrapper[26474]: I0223 13:29:43.655716 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-qw4db" podStartSLOduration=7.655691238 podStartE2EDuration="7.655691238s" podCreationTimestamp="2026-02-23 13:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:43.633656281 +0000 UTC m=+905.480163958" watchObservedRunningTime="2026-02-23 13:29:43.655691238 +0000 UTC m=+905.502198915"
Feb 23 13:29:43.661721 master-0 kubenswrapper[26474]: I0223 13:29:43.661666 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"64431386-9bdd-4d6d-b469-cf1733e6ae01","Type":"ContainerDied","Data":"601cf432990aaa7ea645f03e3e792058e12099dcb58e353d26074960326ab256"}
Feb 23 13:29:43.662055 master-0 kubenswrapper[26474]: I0223 13:29:43.662037 26474 scope.go:117] "RemoveContainer" containerID="0f29bec70791b04a1f83e8f9002bf09630c2b9c91ee0fb472b6424c7b608ae1b"
Feb 23 13:29:43.662254 master-0 kubenswrapper[26474]: I0223 13:29:43.662077 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.678595 master-0 kubenswrapper[26474]: I0223 13:29:43.678470 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-46be-account-create-update-v6qrd" event={"ID":"c2bb7e81-33b2-457d-98f9-0a9114ce13f4","Type":"ContainerStarted","Data":"e0fe901b02a943589336827d59407886fbcfb85ddd478352d491183cfee4f84a"}
Feb 23 13:29:43.678595 master-0 kubenswrapper[26474]: I0223 13:29:43.678594 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-46be-account-create-update-v6qrd" event={"ID":"c2bb7e81-33b2-457d-98f9-0a9114ce13f4","Type":"ContainerStarted","Data":"d675b69b48cc1546cf3ef593d85f01f0cb1a967e5d69fc185b6dc0951721c363"}
Feb 23 13:29:43.688393 master-0 kubenswrapper[26474]: I0223 13:29:43.688321 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7274-account-create-update-2r52m" podStartSLOduration=7.68818902 podStartE2EDuration="7.68818902s" podCreationTimestamp="2026-02-23 13:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:43.655621726 +0000 UTC m=+905.502129413" watchObservedRunningTime="2026-02-23 13:29:43.68818902 +0000 UTC m=+905.534696697"
Feb 23 13:29:43.701916 master-0 kubenswrapper[26474]: I0223 13:29:43.701870 26474 scope.go:117] "RemoveContainer" containerID="5d462ace45b7339e633e21bbec2030637d88bd78f71b07fe7e58244a384422c7"
Feb 23 13:29:43.716812 master-0 kubenswrapper[26474]: I0223 13:29:43.716728 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-bqkcr" podStartSLOduration=7.716705646 podStartE2EDuration="7.716705646s" podCreationTimestamp="2026-02-23 13:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:43.673051871 +0000 UTC m=+905.519559548" watchObservedRunningTime="2026-02-23 13:29:43.716705646 +0000 UTC m=+905.563213323"
Feb 23 13:29:43.759685 master-0 kubenswrapper[26474]: I0223 13:29:43.759504 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-222a-account-create-update-6g2mv" podStartSLOduration=7.759475438 podStartE2EDuration="7.759475438s" podCreationTimestamp="2026-02-23 13:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:43.720382585 +0000 UTC m=+905.566890272" watchObservedRunningTime="2026-02-23 13:29:43.759475438 +0000 UTC m=+905.605983115"
Feb 23 13:29:43.787698 master-0 kubenswrapper[26474]: I0223 13:29:43.787582 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"]
Feb 23 13:29:43.804752 master-0 kubenswrapper[26474]: I0223 13:29:43.804692 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"]
Feb 23 13:29:43.812833 master-0 kubenswrapper[26474]: I0223 13:29:43.811568 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-46be-account-create-update-v6qrd" podStartSLOduration=7.811546588 podStartE2EDuration="7.811546588s" podCreationTimestamp="2026-02-23 13:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:43.765862244 +0000 UTC m=+905.612369921" watchObservedRunningTime="2026-02-23 13:29:43.811546588 +0000 UTC m=+905.658054265"
Feb 23 13:29:43.833027 master-0 kubenswrapper[26474]: I0223 13:29:43.828812 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4fec4-default-external-api-0"]
Feb 23 13:29:43.833027 master-0 kubenswrapper[26474]: I0223 13:29:43.830871 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.835092 master-0 kubenswrapper[26474]: I0223 13:29:43.835073 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4fec4-default-external-config-data"
Feb 23 13:29:43.835850 master-0 kubenswrapper[26474]: I0223 13:29:43.835835 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 23 13:29:43.841603 master-0 kubenswrapper[26474]: I0223 13:29:43.841165 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"]
Feb 23 13:29:43.863194 master-0 kubenswrapper[26474]: I0223 13:29:43.863124 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4392daca-d648-4740-b6ef-62a95c306245-httpd-run\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.863464 master-0 kubenswrapper[26474]: I0223 13:29:43.863233 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.863464 master-0 kubenswrapper[26474]: I0223 13:29:43.863277 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4392daca-d648-4740-b6ef-62a95c306245-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.863464 master-0 kubenswrapper[26474]: I0223 13:29:43.863308 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.863464 master-0 kubenswrapper[26474]: I0223 13:29:43.863364 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.863464 master-0 kubenswrapper[26474]: I0223 13:29:43.863420 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.863682 master-0 kubenswrapper[26474]: I0223 13:29:43.863464 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-829ps\" (UniqueName: \"kubernetes.io/projected/4392daca-d648-4740-b6ef-62a95c306245-kube-api-access-829ps\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.863682 master-0 kubenswrapper[26474]: I0223 13:29:43.863657 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.965293 master-0 kubenswrapper[26474]: I0223 13:29:43.965190 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.965293 master-0 kubenswrapper[26474]: I0223 13:29:43.965273 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4392daca-d648-4740-b6ef-62a95c306245-httpd-run\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.965570 master-0 kubenswrapper[26474]: I0223 13:29:43.965310 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.965570 master-0 kubenswrapper[26474]: I0223 13:29:43.965389 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4392daca-d648-4740-b6ef-62a95c306245-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.965570 master-0 kubenswrapper[26474]: I0223 13:29:43.965417 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.965570 master-0 kubenswrapper[26474]: I0223 13:29:43.965446 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.965570 master-0 kubenswrapper[26474]: I0223 13:29:43.965493 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.965570 master-0 kubenswrapper[26474]: I0223 13:29:43.965534 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-829ps\" (UniqueName: \"kubernetes.io/projected/4392daca-d648-4740-b6ef-62a95c306245-kube-api-access-829ps\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.969829 master-0 kubenswrapper[26474]: I0223 13:29:43.966995 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4392daca-d648-4740-b6ef-62a95c306245-logs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.969829 master-0 kubenswrapper[26474]: I0223 13:29:43.967264 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4392daca-d648-4740-b6ef-62a95c306245-httpd-run\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.969829 master-0 kubenswrapper[26474]: I0223 13:29:43.969799 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-public-tls-certs\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.970882 master-0 kubenswrapper[26474]: I0223 13:29:43.970850 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-scripts\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.971660 master-0 kubenswrapper[26474]: I0223 13:29:43.971619 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-config-data\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.971820 master-0 kubenswrapper[26474]: I0223 13:29:43.971772 26474 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 13:29:43.971902 master-0 kubenswrapper[26474]: I0223 13:29:43.971846 26474 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1725139a53e08998fd353fef3de63f88daa57a9b0265ed3b7798973e2d5ac396/globalmount\"" pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.975389 master-0 kubenswrapper[26474]: I0223 13:29:43.974886 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4392daca-d648-4740-b6ef-62a95c306245-combined-ca-bundle\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:43.984539 master-0 kubenswrapper[26474]: I0223 13:29:43.984487 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-829ps\" (UniqueName: \"kubernetes.io/projected/4392daca-d648-4740-b6ef-62a95c306245-kube-api-access-829ps\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0"
Feb 23 13:29:44.370627 master-0 kubenswrapper[26474]: I0223 13:29:44.370549 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-944b9e95-3688-4337-a744-6330aac8e963\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3d4fee37-648a-419d-824d-73180ab1f2dc\") pod \"glance-4fec4-default-internal-api-0\" (UID: \"d053834d-7877-4283-b937-693f20d0c6a4\") " pod="openstack/glance-4fec4-default-internal-api-0"
Feb 23 13:29:44.413310 master-0 kubenswrapper[26474]: I0223 13:29:44.413253 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="64431386-9bdd-4d6d-b469-cf1733e6ae01" path="/var/lib/kubelet/pods/64431386-9bdd-4d6d-b469-cf1733e6ae01/volumes" Feb 23 13:29:44.414024 master-0 kubenswrapper[26474]: I0223 13:29:44.413994 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7cb4e9-6c67-409d-8446-cce05d6b8fbb" path="/var/lib/kubelet/pods/6e7cb4e9-6c67-409d-8446-cce05d6b8fbb/volumes" Feb 23 13:29:44.416871 master-0 kubenswrapper[26474]: I0223 13:29:44.416731 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0c6e3b6-c201-44f9-9100-819b15b552f4" path="/var/lib/kubelet/pods/d0c6e3b6-c201-44f9-9100-819b15b552f4/volumes" Feb 23 13:29:44.418171 master-0 kubenswrapper[26474]: I0223 13:29:44.418135 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5fa92de-9c73-449e-9f0d-abbd176f1eb5" path="/var/lib/kubelet/pods/d5fa92de-9c73-449e-9f0d-abbd176f1eb5/volumes" Feb 23 13:29:44.617956 master-0 kubenswrapper[26474]: I0223 13:29:44.617890 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:44.758694 master-0 kubenswrapper[26474]: I0223 13:29:44.758632 26474 generic.go:334] "Generic (PLEG): container finished" podID="c2bb7e81-33b2-457d-98f9-0a9114ce13f4" containerID="e0fe901b02a943589336827d59407886fbcfb85ddd478352d491183cfee4f84a" exitCode=0 Feb 23 13:29:44.764381 master-0 kubenswrapper[26474]: I0223 13:29:44.759132 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-46be-account-create-update-v6qrd" event={"ID":"c2bb7e81-33b2-457d-98f9-0a9114ce13f4","Type":"ContainerDied","Data":"e0fe901b02a943589336827d59407886fbcfb85ddd478352d491183cfee4f84a"} Feb 23 13:29:44.766938 master-0 kubenswrapper[26474]: I0223 13:29:44.766858 26474 generic.go:334] "Generic (PLEG): container finished" podID="3071f1bd-cdeb-433e-bd4b-a02190489f95" containerID="568c5f81feff9fcc93f089d2c09c00795d8eb698a8c61eb9d0531dd29ea9e5e7" exitCode=0 Feb 23 13:29:44.767657 master-0 kubenswrapper[26474]: I0223 13:29:44.767503 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-222a-account-create-update-6g2mv" event={"ID":"3071f1bd-cdeb-433e-bd4b-a02190489f95","Type":"ContainerDied","Data":"568c5f81feff9fcc93f089d2c09c00795d8eb698a8c61eb9d0531dd29ea9e5e7"} Feb 23 13:29:44.770439 master-0 kubenswrapper[26474]: I0223 13:29:44.769925 26474 generic.go:334] "Generic (PLEG): container finished" podID="5cd82ed9-d70c-4c60-afb4-63db5a57aa79" containerID="c47d48df2c815f68cfcc77cbabf1749afe3bd7eb1752d17238b43114ab44679a" exitCode=0 Feb 23 13:29:44.770439 master-0 kubenswrapper[26474]: I0223 13:29:44.770028 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qw4db" event={"ID":"5cd82ed9-d70c-4c60-afb4-63db5a57aa79","Type":"ContainerDied","Data":"c47d48df2c815f68cfcc77cbabf1749afe3bd7eb1752d17238b43114ab44679a"} Feb 23 13:29:44.774367 master-0 kubenswrapper[26474]: I0223 13:29:44.771785 26474 generic.go:334] "Generic 
(PLEG): container finished" podID="c631041f-70ef-485e-9113-f33f737f91f7" containerID="1dc91fb386814feef2ef798e000fbc2db1b1eafc4600ab376d4db128bbc7d479" exitCode=0 Feb 23 13:29:44.774367 master-0 kubenswrapper[26474]: I0223 13:29:44.771849 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7274-account-create-update-2r52m" event={"ID":"c631041f-70ef-485e-9113-f33f737f91f7","Type":"ContainerDied","Data":"1dc91fb386814feef2ef798e000fbc2db1b1eafc4600ab376d4db128bbc7d479"} Feb 23 13:29:44.774367 master-0 kubenswrapper[26474]: I0223 13:29:44.773289 26474 generic.go:334] "Generic (PLEG): container finished" podID="199a7a01-a57b-473d-a293-4b88a1a946c8" containerID="487bf583bb1ab0684e719d552dd5bdadd70305ca2de4f87e0b0a368a4e97999a" exitCode=0 Feb 23 13:29:44.774367 master-0 kubenswrapper[26474]: I0223 13:29:44.773501 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bqkcr" event={"ID":"199a7a01-a57b-473d-a293-4b88a1a946c8","Type":"ContainerDied","Data":"487bf583bb1ab0684e719d552dd5bdadd70305ca2de4f87e0b0a368a4e97999a"} Feb 23 13:29:45.210504 master-0 kubenswrapper[26474]: I0223 13:29:45.210456 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0525ee4d-1ebb-40ce-b661-458e34ef4d98\" (UniqueName: \"kubernetes.io/csi/topolvm.io^804b8525-1a35-4992-967f-a0bb03419929\") pod \"glance-4fec4-default-external-api-0\" (UID: \"4392daca-d648-4740-b6ef-62a95c306245\") " pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:45.290144 master-0 kubenswrapper[26474]: W0223 13:29:45.290088 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd053834d_7877_4283_b937_693f20d0c6a4.slice/crio-63666ea5a3a1ec9aeb41554e95eb1472628135471ddc204dae82df6f091374f0 WatchSource:0}: Error finding container 63666ea5a3a1ec9aeb41554e95eb1472628135471ddc204dae82df6f091374f0: Status 404 returned error can't find the 
container with id 63666ea5a3a1ec9aeb41554e95eb1472628135471ddc204dae82df6f091374f0 Feb 23 13:29:45.293010 master-0 kubenswrapper[26474]: I0223 13:29:45.292967 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-internal-api-0"] Feb 23 13:29:45.351424 master-0 kubenswrapper[26474]: I0223 13:29:45.350336 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:45.452806 master-0 kubenswrapper[26474]: I0223 13:29:45.452658 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-whpzb" Feb 23 13:29:45.636426 master-0 kubenswrapper[26474]: I0223 13:29:45.633875 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6k8d\" (UniqueName: \"kubernetes.io/projected/882d9e69-bb35-4914-a2ef-42fb05306b3a-kube-api-access-m6k8d\") pod \"882d9e69-bb35-4914-a2ef-42fb05306b3a\" (UID: \"882d9e69-bb35-4914-a2ef-42fb05306b3a\") " Feb 23 13:29:45.636426 master-0 kubenswrapper[26474]: I0223 13:29:45.633982 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882d9e69-bb35-4914-a2ef-42fb05306b3a-operator-scripts\") pod \"882d9e69-bb35-4914-a2ef-42fb05306b3a\" (UID: \"882d9e69-bb35-4914-a2ef-42fb05306b3a\") " Feb 23 13:29:45.636426 master-0 kubenswrapper[26474]: I0223 13:29:45.635565 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882d9e69-bb35-4914-a2ef-42fb05306b3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "882d9e69-bb35-4914-a2ef-42fb05306b3a" (UID: "882d9e69-bb35-4914-a2ef-42fb05306b3a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:45.637079 master-0 kubenswrapper[26474]: I0223 13:29:45.637035 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882d9e69-bb35-4914-a2ef-42fb05306b3a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:45.642362 master-0 kubenswrapper[26474]: I0223 13:29:45.640564 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882d9e69-bb35-4914-a2ef-42fb05306b3a-kube-api-access-m6k8d" (OuterVolumeSpecName: "kube-api-access-m6k8d") pod "882d9e69-bb35-4914-a2ef-42fb05306b3a" (UID: "882d9e69-bb35-4914-a2ef-42fb05306b3a"). InnerVolumeSpecName "kube-api-access-m6k8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:45.739946 master-0 kubenswrapper[26474]: I0223 13:29:45.739889 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6k8d\" (UniqueName: \"kubernetes.io/projected/882d9e69-bb35-4914-a2ef-42fb05306b3a-kube-api-access-m6k8d\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:45.807977 master-0 kubenswrapper[26474]: I0223 13:29:45.807917 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d053834d-7877-4283-b937-693f20d0c6a4","Type":"ContainerStarted","Data":"63666ea5a3a1ec9aeb41554e95eb1472628135471ddc204dae82df6f091374f0"} Feb 23 13:29:45.810660 master-0 kubenswrapper[26474]: I0223 13:29:45.810627 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-whpzb" Feb 23 13:29:45.810831 master-0 kubenswrapper[26474]: I0223 13:29:45.810660 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-whpzb" event={"ID":"882d9e69-bb35-4914-a2ef-42fb05306b3a","Type":"ContainerDied","Data":"bd024d24e170f053a332460fd33871d83c732aa43a215fa29f072bfded38f77c"} Feb 23 13:29:45.810831 master-0 kubenswrapper[26474]: I0223 13:29:45.810711 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd024d24e170f053a332460fd33871d83c732aa43a215fa29f072bfded38f77c" Feb 23 13:29:46.019872 master-0 kubenswrapper[26474]: I0223 13:29:46.019777 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4fec4-default-external-api-0"] Feb 23 13:29:46.828993 master-0 kubenswrapper[26474]: I0223 13:29:46.828590 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d053834d-7877-4283-b937-693f20d0c6a4","Type":"ContainerStarted","Data":"75213ac0102c8532caeacb1cd7aa43d260f9d7c48ad272cb377a905fea7a2c2c"} Feb 23 13:29:46.921565 master-0 kubenswrapper[26474]: W0223 13:29:46.921502 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4392daca_d648_4740_b6ef_62a95c306245.slice/crio-96c7a1c4bbf69eebff8f64bc11a83ee633fb6231be2b9a532258ac01da7fc0de WatchSource:0}: Error finding container 96c7a1c4bbf69eebff8f64bc11a83ee633fb6231be2b9a532258ac01da7fc0de: Status 404 returned error can't find the container with id 96c7a1c4bbf69eebff8f64bc11a83ee633fb6231be2b9a532258ac01da7fc0de Feb 23 13:29:47.126460 master-0 kubenswrapper[26474]: I0223 13:29:47.124862 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-222a-account-create-update-6g2mv" Feb 23 13:29:47.165364 master-0 kubenswrapper[26474]: I0223 13:29:47.163511 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qw4db" Feb 23 13:29:47.165364 master-0 kubenswrapper[26474]: I0223 13:29:47.164711 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-46be-account-create-update-v6qrd" Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.222453 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvhd6\" (UniqueName: \"kubernetes.io/projected/3071f1bd-cdeb-433e-bd4b-a02190489f95-kube-api-access-jvhd6\") pod \"3071f1bd-cdeb-433e-bd4b-a02190489f95\" (UID: \"3071f1bd-cdeb-433e-bd4b-a02190489f95\") " Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.222558 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-operator-scripts\") pod \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\" (UID: \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\") " Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.222632 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbp4v\" (UniqueName: \"kubernetes.io/projected/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-kube-api-access-tbp4v\") pod \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\" (UID: \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\") " Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.222794 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcm8m\" (UniqueName: \"kubernetes.io/projected/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-kube-api-access-jcm8m\") pod \"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\" (UID: 
\"c2bb7e81-33b2-457d-98f9-0a9114ce13f4\") " Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.222844 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3071f1bd-cdeb-433e-bd4b-a02190489f95-operator-scripts\") pod \"3071f1bd-cdeb-433e-bd4b-a02190489f95\" (UID: \"3071f1bd-cdeb-433e-bd4b-a02190489f95\") " Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.222995 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-operator-scripts\") pod \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\" (UID: \"5cd82ed9-d70c-4c60-afb4-63db5a57aa79\") " Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.224582 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5cd82ed9-d70c-4c60-afb4-63db5a57aa79" (UID: "5cd82ed9-d70c-4c60-afb4-63db5a57aa79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.225548 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3071f1bd-cdeb-433e-bd4b-a02190489f95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3071f1bd-cdeb-433e-bd4b-a02190489f95" (UID: "3071f1bd-cdeb-433e-bd4b-a02190489f95"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:47.226392 master-0 kubenswrapper[26474]: I0223 13:29:47.225939 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2bb7e81-33b2-457d-98f9-0a9114ce13f4" (UID: "c2bb7e81-33b2-457d-98f9-0a9114ce13f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:47.231149 master-0 kubenswrapper[26474]: I0223 13:29:47.228616 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-kube-api-access-jcm8m" (OuterVolumeSpecName: "kube-api-access-jcm8m") pod "c2bb7e81-33b2-457d-98f9-0a9114ce13f4" (UID: "c2bb7e81-33b2-457d-98f9-0a9114ce13f4"). InnerVolumeSpecName "kube-api-access-jcm8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:47.261378 master-0 kubenswrapper[26474]: I0223 13:29:47.256880 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-kube-api-access-tbp4v" (OuterVolumeSpecName: "kube-api-access-tbp4v") pod "5cd82ed9-d70c-4c60-afb4-63db5a57aa79" (UID: "5cd82ed9-d70c-4c60-afb4-63db5a57aa79"). InnerVolumeSpecName "kube-api-access-tbp4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:47.270705 master-0 kubenswrapper[26474]: I0223 13:29:47.270607 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3071f1bd-cdeb-433e-bd4b-a02190489f95-kube-api-access-jvhd6" (OuterVolumeSpecName: "kube-api-access-jvhd6") pod "3071f1bd-cdeb-433e-bd4b-a02190489f95" (UID: "3071f1bd-cdeb-433e-bd4b-a02190489f95"). InnerVolumeSpecName "kube-api-access-jvhd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:47.315372 master-0 kubenswrapper[26474]: I0223 13:29:47.315258 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bqkcr" Feb 23 13:29:47.322898 master-0 kubenswrapper[26474]: I0223 13:29:47.322854 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7274-account-create-update-2r52m" Feb 23 13:29:47.326417 master-0 kubenswrapper[26474]: I0223 13:29:47.326379 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvhd6\" (UniqueName: \"kubernetes.io/projected/3071f1bd-cdeb-433e-bd4b-a02190489f95-kube-api-access-jvhd6\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.326417 master-0 kubenswrapper[26474]: I0223 13:29:47.326414 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.326507 master-0 kubenswrapper[26474]: I0223 13:29:47.326425 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbp4v\" (UniqueName: \"kubernetes.io/projected/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-kube-api-access-tbp4v\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.326507 master-0 kubenswrapper[26474]: I0223 13:29:47.326434 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcm8m\" (UniqueName: \"kubernetes.io/projected/c2bb7e81-33b2-457d-98f9-0a9114ce13f4-kube-api-access-jcm8m\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.326507 master-0 kubenswrapper[26474]: I0223 13:29:47.326444 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3071f1bd-cdeb-433e-bd4b-a02190489f95-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.326507 master-0 
kubenswrapper[26474]: I0223 13:29:47.326453 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd82ed9-d70c-4c60-afb4-63db5a57aa79-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.427844 master-0 kubenswrapper[26474]: I0223 13:29:47.427788 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnh98\" (UniqueName: \"kubernetes.io/projected/c631041f-70ef-485e-9113-f33f737f91f7-kube-api-access-jnh98\") pod \"c631041f-70ef-485e-9113-f33f737f91f7\" (UID: \"c631041f-70ef-485e-9113-f33f737f91f7\") " Feb 23 13:29:47.427994 master-0 kubenswrapper[26474]: I0223 13:29:47.427904 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199a7a01-a57b-473d-a293-4b88a1a946c8-operator-scripts\") pod \"199a7a01-a57b-473d-a293-4b88a1a946c8\" (UID: \"199a7a01-a57b-473d-a293-4b88a1a946c8\") " Feb 23 13:29:47.428115 master-0 kubenswrapper[26474]: I0223 13:29:47.428095 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv9w6\" (UniqueName: \"kubernetes.io/projected/199a7a01-a57b-473d-a293-4b88a1a946c8-kube-api-access-kv9w6\") pod \"199a7a01-a57b-473d-a293-4b88a1a946c8\" (UID: \"199a7a01-a57b-473d-a293-4b88a1a946c8\") " Feb 23 13:29:47.428289 master-0 kubenswrapper[26474]: I0223 13:29:47.428248 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c631041f-70ef-485e-9113-f33f737f91f7-operator-scripts\") pod \"c631041f-70ef-485e-9113-f33f737f91f7\" (UID: \"c631041f-70ef-485e-9113-f33f737f91f7\") " Feb 23 13:29:47.428849 master-0 kubenswrapper[26474]: I0223 13:29:47.428797 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/199a7a01-a57b-473d-a293-4b88a1a946c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "199a7a01-a57b-473d-a293-4b88a1a946c8" (UID: "199a7a01-a57b-473d-a293-4b88a1a946c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:47.429319 master-0 kubenswrapper[26474]: I0223 13:29:47.429279 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c631041f-70ef-485e-9113-f33f737f91f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c631041f-70ef-485e-9113-f33f737f91f7" (UID: "c631041f-70ef-485e-9113-f33f737f91f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:47.435051 master-0 kubenswrapper[26474]: I0223 13:29:47.431448 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c631041f-70ef-485e-9113-f33f737f91f7-kube-api-access-jnh98" (OuterVolumeSpecName: "kube-api-access-jnh98") pod "c631041f-70ef-485e-9113-f33f737f91f7" (UID: "c631041f-70ef-485e-9113-f33f737f91f7"). InnerVolumeSpecName "kube-api-access-jnh98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:47.435051 master-0 kubenswrapper[26474]: I0223 13:29:47.432297 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199a7a01-a57b-473d-a293-4b88a1a946c8-kube-api-access-kv9w6" (OuterVolumeSpecName: "kube-api-access-kv9w6") pod "199a7a01-a57b-473d-a293-4b88a1a946c8" (UID: "199a7a01-a57b-473d-a293-4b88a1a946c8"). InnerVolumeSpecName "kube-api-access-kv9w6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:47.539095 master-0 kubenswrapper[26474]: I0223 13:29:47.539035 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv9w6\" (UniqueName: \"kubernetes.io/projected/199a7a01-a57b-473d-a293-4b88a1a946c8-kube-api-access-kv9w6\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.539095 master-0 kubenswrapper[26474]: I0223 13:29:47.539084 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c631041f-70ef-485e-9113-f33f737f91f7-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.539095 master-0 kubenswrapper[26474]: I0223 13:29:47.539095 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnh98\" (UniqueName: \"kubernetes.io/projected/c631041f-70ef-485e-9113-f33f737f91f7-kube-api-access-jnh98\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.539095 master-0 kubenswrapper[26474]: I0223 13:29:47.539104 26474 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199a7a01-a57b-473d-a293-4b88a1a946c8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:47.849022 master-0 kubenswrapper[26474]: I0223 13:29:47.848976 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-222a-account-create-update-6g2mv" event={"ID":"3071f1bd-cdeb-433e-bd4b-a02190489f95","Type":"ContainerDied","Data":"0ab482afad7f6469273bb76a0b8e8f6b807e7da1e470ffcbf111cf2096a5c38e"} Feb 23 13:29:47.849611 master-0 kubenswrapper[26474]: I0223 13:29:47.849585 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab482afad7f6469273bb76a0b8e8f6b807e7da1e470ffcbf111cf2096a5c38e" Feb 23 13:29:47.849721 master-0 kubenswrapper[26474]: I0223 13:29:47.849517 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-222a-account-create-update-6g2mv" Feb 23 13:29:47.852003 master-0 kubenswrapper[26474]: I0223 13:29:47.851972 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qw4db" Feb 23 13:29:47.852087 master-0 kubenswrapper[26474]: I0223 13:29:47.851991 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qw4db" event={"ID":"5cd82ed9-d70c-4c60-afb4-63db5a57aa79","Type":"ContainerDied","Data":"66e40a285e120748602fe3dcdb563ecd6f5addea2f9adde75245c437a4ee8e12"} Feb 23 13:29:47.852087 master-0 kubenswrapper[26474]: I0223 13:29:47.852038 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e40a285e120748602fe3dcdb563ecd6f5addea2f9adde75245c437a4ee8e12" Feb 23 13:29:47.862693 master-0 kubenswrapper[26474]: I0223 13:29:47.862603 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7274-account-create-update-2r52m" event={"ID":"c631041f-70ef-485e-9113-f33f737f91f7","Type":"ContainerDied","Data":"093e488fe4090f334ef82ec89a0b7e12f4a37d43a81d117174f0fd2fb82ab9fd"} Feb 23 13:29:47.862693 master-0 kubenswrapper[26474]: I0223 13:29:47.862676 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="093e488fe4090f334ef82ec89a0b7e12f4a37d43a81d117174f0fd2fb82ab9fd" Feb 23 13:29:47.863211 master-0 kubenswrapper[26474]: I0223 13:29:47.862754 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7274-account-create-update-2r52m" Feb 23 13:29:47.872051 master-0 kubenswrapper[26474]: I0223 13:29:47.872000 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bqkcr" Feb 23 13:29:47.874934 master-0 kubenswrapper[26474]: I0223 13:29:47.871991 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bqkcr" event={"ID":"199a7a01-a57b-473d-a293-4b88a1a946c8","Type":"ContainerDied","Data":"e639d11db670984068b1abfad5f74ded9ead6fe8484b749b981f5d04a68c6719"} Feb 23 13:29:47.874934 master-0 kubenswrapper[26474]: I0223 13:29:47.872568 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e639d11db670984068b1abfad5f74ded9ead6fe8484b749b981f5d04a68c6719" Feb 23 13:29:47.876259 master-0 kubenswrapper[26474]: I0223 13:29:47.876218 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-internal-api-0" event={"ID":"d053834d-7877-4283-b937-693f20d0c6a4","Type":"ContainerStarted","Data":"b060646a06dcc10567068fd5d73ed87514b9ba8a64691ae62d1bfb27cfe09f71"} Feb 23 13:29:47.882026 master-0 kubenswrapper[26474]: I0223 13:29:47.881974 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"4392daca-d648-4740-b6ef-62a95c306245","Type":"ContainerStarted","Data":"6feecf3d5be1d3c5001ef22b0920f77864576d34a4ae46ee38c90ada9f96832d"} Feb 23 13:29:47.882111 master-0 kubenswrapper[26474]: I0223 13:29:47.882037 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"4392daca-d648-4740-b6ef-62a95c306245","Type":"ContainerStarted","Data":"96c7a1c4bbf69eebff8f64bc11a83ee633fb6231be2b9a532258ac01da7fc0de"} Feb 23 13:29:47.892811 master-0 kubenswrapper[26474]: I0223 13:29:47.892722 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-mrlgq" event={"ID":"e9fcecfd-b548-47da-800d-24deb9834fa7","Type":"ContainerStarted","Data":"ba2072224417be79c18726ec8fe705f72a6f3069efdf8a43b2937ab23c77aa13"} Feb 23 13:29:47.896671 master-0 
kubenswrapper[26474]: I0223 13:29:47.896608 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-46be-account-create-update-v6qrd" event={"ID":"c2bb7e81-33b2-457d-98f9-0a9114ce13f4","Type":"ContainerDied","Data":"d675b69b48cc1546cf3ef593d85f01f0cb1a967e5d69fc185b6dc0951721c363"} Feb 23 13:29:47.896671 master-0 kubenswrapper[26474]: I0223 13:29:47.896664 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d675b69b48cc1546cf3ef593d85f01f0cb1a967e5d69fc185b6dc0951721c363" Feb 23 13:29:47.896836 master-0 kubenswrapper[26474]: I0223 13:29:47.896729 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-46be-account-create-update-v6qrd" Feb 23 13:29:47.926469 master-0 kubenswrapper[26474]: I0223 13:29:47.926374 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4fec4-default-internal-api-0" podStartSLOduration=5.926272809 podStartE2EDuration="5.926272809s" podCreationTimestamp="2026-02-23 13:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:47.915987638 +0000 UTC m=+909.762495315" watchObservedRunningTime="2026-02-23 13:29:47.926272809 +0000 UTC m=+909.772780496" Feb 23 13:29:47.955219 master-0 kubenswrapper[26474]: I0223 13:29:47.955125 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-mrlgq" podStartSLOduration=9.662650589 podStartE2EDuration="13.955106722s" podCreationTimestamp="2026-02-23 13:29:34 +0000 UTC" firstStartedPulling="2026-02-23 13:29:42.842927593 +0000 UTC m=+904.689435270" lastFinishedPulling="2026-02-23 13:29:47.135383726 +0000 UTC m=+908.981891403" observedRunningTime="2026-02-23 13:29:47.942969286 +0000 UTC m=+909.789476963" watchObservedRunningTime="2026-02-23 13:29:47.955106722 +0000 UTC m=+909.801614399" Feb 23 
13:29:48.912786 master-0 kubenswrapper[26474]: I0223 13:29:48.912710 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4fec4-default-external-api-0" event={"ID":"4392daca-d648-4740-b6ef-62a95c306245","Type":"ContainerStarted","Data":"5352f72b65aa2f8959a5843353a1d9a5b15cc29bc13fef3f2359a69774696176"} Feb 23 13:29:48.943904 master-0 kubenswrapper[26474]: I0223 13:29:48.943823 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4fec4-default-external-api-0" podStartSLOduration=5.943787907 podStartE2EDuration="5.943787907s" podCreationTimestamp="2026-02-23 13:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:48.93858164 +0000 UTC m=+910.785089347" watchObservedRunningTime="2026-02-23 13:29:48.943787907 +0000 UTC m=+910.790295584" Feb 23 13:29:49.924303 master-0 kubenswrapper[26474]: I0223 13:29:49.924150 26474 generic.go:334] "Generic (PLEG): container finished" podID="e9fcecfd-b548-47da-800d-24deb9834fa7" containerID="ba2072224417be79c18726ec8fe705f72a6f3069efdf8a43b2937ab23c77aa13" exitCode=0 Feb 23 13:29:49.924303 master-0 kubenswrapper[26474]: I0223 13:29:49.924247 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-mrlgq" event={"ID":"e9fcecfd-b548-47da-800d-24deb9834fa7","Type":"ContainerDied","Data":"ba2072224417be79c18726ec8fe705f72a6f3069efdf8a43b2937ab23c77aa13"} Feb 23 13:29:50.393436 master-0 kubenswrapper[26474]: I0223 13:29:50.393371 26474 scope.go:117] "RemoveContainer" containerID="751e7e146ef764b9d2f8df41cc825e31db1822a7b4926a1ff015ccaf86c67ce7" Feb 23 13:29:50.939582 master-0 kubenswrapper[26474]: I0223 13:29:50.939514 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" 
event={"ID":"7af5b404-f4af-4e67-b355-916c6240db47","Type":"ContainerStarted","Data":"b2394cdd0bb6048cd77618c592fbb82c856abbed019e8c8b3e685ce49deaab6c"} Feb 23 13:29:50.940149 master-0 kubenswrapper[26474]: I0223 13:29:50.939787 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:51.442355 master-0 kubenswrapper[26474]: I0223 13:29:51.442261 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:51.539189 master-0 kubenswrapper[26474]: I0223 13:29:51.539111 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"e9fcecfd-b548-47da-800d-24deb9834fa7\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " Feb 23 13:29:51.539189 master-0 kubenswrapper[26474]: I0223 13:29:51.539199 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-scripts\") pod \"e9fcecfd-b548-47da-800d-24deb9834fa7\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " Feb 23 13:29:51.539471 master-0 kubenswrapper[26474]: I0223 13:29:51.539326 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e9fcecfd-b548-47da-800d-24deb9834fa7-etc-podinfo\") pod \"e9fcecfd-b548-47da-800d-24deb9834fa7\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " Feb 23 13:29:51.539471 master-0 kubenswrapper[26474]: I0223 13:29:51.539410 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic\") pod 
\"e9fcecfd-b548-47da-800d-24deb9834fa7\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " Feb 23 13:29:51.539471 master-0 kubenswrapper[26474]: I0223 13:29:51.539430 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz4rn\" (UniqueName: \"kubernetes.io/projected/e9fcecfd-b548-47da-800d-24deb9834fa7-kube-api-access-xz4rn\") pod \"e9fcecfd-b548-47da-800d-24deb9834fa7\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " Feb 23 13:29:51.539577 master-0 kubenswrapper[26474]: I0223 13:29:51.539478 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-combined-ca-bundle\") pod \"e9fcecfd-b548-47da-800d-24deb9834fa7\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " Feb 23 13:29:51.539577 master-0 kubenswrapper[26474]: I0223 13:29:51.539575 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-config\") pod \"e9fcecfd-b548-47da-800d-24deb9834fa7\" (UID: \"e9fcecfd-b548-47da-800d-24deb9834fa7\") " Feb 23 13:29:51.539768 master-0 kubenswrapper[26474]: I0223 13:29:51.539656 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "e9fcecfd-b548-47da-800d-24deb9834fa7" (UID: "e9fcecfd-b548-47da-800d-24deb9834fa7"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:51.539813 master-0 kubenswrapper[26474]: I0223 13:29:51.539737 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "e9fcecfd-b548-47da-800d-24deb9834fa7" (UID: "e9fcecfd-b548-47da-800d-24deb9834fa7"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:51.542128 master-0 kubenswrapper[26474]: I0223 13:29:51.542087 26474 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:51.542128 master-0 kubenswrapper[26474]: I0223 13:29:51.542119 26474 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e9fcecfd-b548-47da-800d-24deb9834fa7-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:51.543458 master-0 kubenswrapper[26474]: I0223 13:29:51.543404 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9fcecfd-b548-47da-800d-24deb9834fa7-kube-api-access-xz4rn" (OuterVolumeSpecName: "kube-api-access-xz4rn") pod "e9fcecfd-b548-47da-800d-24deb9834fa7" (UID: "e9fcecfd-b548-47da-800d-24deb9834fa7"). InnerVolumeSpecName "kube-api-access-xz4rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:51.543535 master-0 kubenswrapper[26474]: I0223 13:29:51.543517 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e9fcecfd-b548-47da-800d-24deb9834fa7-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "e9fcecfd-b548-47da-800d-24deb9834fa7" (UID: "e9fcecfd-b548-47da-800d-24deb9834fa7"). 
InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 13:29:51.543859 master-0 kubenswrapper[26474]: I0223 13:29:51.543817 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-scripts" (OuterVolumeSpecName: "scripts") pod "e9fcecfd-b548-47da-800d-24deb9834fa7" (UID: "e9fcecfd-b548-47da-800d-24deb9834fa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:51.574906 master-0 kubenswrapper[26474]: I0223 13:29:51.574821 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9fcecfd-b548-47da-800d-24deb9834fa7" (UID: "e9fcecfd-b548-47da-800d-24deb9834fa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:51.576834 master-0 kubenswrapper[26474]: I0223 13:29:51.576762 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-config" (OuterVolumeSpecName: "config") pod "e9fcecfd-b548-47da-800d-24deb9834fa7" (UID: "e9fcecfd-b548-47da-800d-24deb9834fa7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:51.643894 master-0 kubenswrapper[26474]: I0223 13:29:51.643808 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:51.643894 master-0 kubenswrapper[26474]: I0223 13:29:51.643861 26474 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e9fcecfd-b548-47da-800d-24deb9834fa7-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:51.643894 master-0 kubenswrapper[26474]: I0223 13:29:51.643878 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz4rn\" (UniqueName: \"kubernetes.io/projected/e9fcecfd-b548-47da-800d-24deb9834fa7-kube-api-access-xz4rn\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:51.643894 master-0 kubenswrapper[26474]: I0223 13:29:51.643888 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:51.643894 master-0 kubenswrapper[26474]: I0223 13:29:51.643899 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9fcecfd-b548-47da-800d-24deb9834fa7-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:29:51.843122 master-0 kubenswrapper[26474]: I0223 13:29:51.843045 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d78v8"] Feb 23 13:29:51.843767 master-0 kubenswrapper[26474]: E0223 13:29:51.843741 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fcecfd-b548-47da-800d-24deb9834fa7" containerName="ironic-inspector-db-sync" Feb 23 13:29:51.843829 master-0 kubenswrapper[26474]: I0223 13:29:51.843770 26474 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e9fcecfd-b548-47da-800d-24deb9834fa7" containerName="ironic-inspector-db-sync" Feb 23 13:29:51.843829 master-0 kubenswrapper[26474]: E0223 13:29:51.843810 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882d9e69-bb35-4914-a2ef-42fb05306b3a" containerName="mariadb-database-create" Feb 23 13:29:51.843829 master-0 kubenswrapper[26474]: I0223 13:29:51.843819 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="882d9e69-bb35-4914-a2ef-42fb05306b3a" containerName="mariadb-database-create" Feb 23 13:29:51.843921 master-0 kubenswrapper[26474]: E0223 13:29:51.843843 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3071f1bd-cdeb-433e-bd4b-a02190489f95" containerName="mariadb-account-create-update" Feb 23 13:29:51.843921 master-0 kubenswrapper[26474]: I0223 13:29:51.843853 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="3071f1bd-cdeb-433e-bd4b-a02190489f95" containerName="mariadb-account-create-update" Feb 23 13:29:51.843921 master-0 kubenswrapper[26474]: E0223 13:29:51.843892 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199a7a01-a57b-473d-a293-4b88a1a946c8" containerName="mariadb-database-create" Feb 23 13:29:51.843921 master-0 kubenswrapper[26474]: I0223 13:29:51.843903 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="199a7a01-a57b-473d-a293-4b88a1a946c8" containerName="mariadb-database-create" Feb 23 13:29:51.843921 master-0 kubenswrapper[26474]: E0223 13:29:51.843916 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c631041f-70ef-485e-9113-f33f737f91f7" containerName="mariadb-account-create-update" Feb 23 13:29:51.844061 master-0 kubenswrapper[26474]: I0223 13:29:51.843924 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c631041f-70ef-485e-9113-f33f737f91f7" containerName="mariadb-account-create-update" Feb 23 13:29:51.844061 master-0 kubenswrapper[26474]: E0223 13:29:51.843960 26474 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5cd82ed9-d70c-4c60-afb4-63db5a57aa79" containerName="mariadb-database-create" Feb 23 13:29:51.844061 master-0 kubenswrapper[26474]: I0223 13:29:51.843970 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd82ed9-d70c-4c60-afb4-63db5a57aa79" containerName="mariadb-database-create" Feb 23 13:29:51.844061 master-0 kubenswrapper[26474]: E0223 13:29:51.843985 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2bb7e81-33b2-457d-98f9-0a9114ce13f4" containerName="mariadb-account-create-update" Feb 23 13:29:51.844061 master-0 kubenswrapper[26474]: I0223 13:29:51.843992 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bb7e81-33b2-457d-98f9-0a9114ce13f4" containerName="mariadb-account-create-update" Feb 23 13:29:51.844310 master-0 kubenswrapper[26474]: I0223 13:29:51.844288 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="882d9e69-bb35-4914-a2ef-42fb05306b3a" containerName="mariadb-database-create" Feb 23 13:29:51.844368 master-0 kubenswrapper[26474]: I0223 13:29:51.844330 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd82ed9-d70c-4c60-afb4-63db5a57aa79" containerName="mariadb-database-create" Feb 23 13:29:51.844368 master-0 kubenswrapper[26474]: I0223 13:29:51.844364 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c631041f-70ef-485e-9113-f33f737f91f7" containerName="mariadb-account-create-update" Feb 23 13:29:51.844441 master-0 kubenswrapper[26474]: I0223 13:29:51.844385 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2bb7e81-33b2-457d-98f9-0a9114ce13f4" containerName="mariadb-account-create-update" Feb 23 13:29:51.844441 master-0 kubenswrapper[26474]: I0223 13:29:51.844414 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="199a7a01-a57b-473d-a293-4b88a1a946c8" containerName="mariadb-database-create" Feb 23 13:29:51.844441 master-0 kubenswrapper[26474]: I0223 
13:29:51.844438 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fcecfd-b548-47da-800d-24deb9834fa7" containerName="ironic-inspector-db-sync" Feb 23 13:29:51.844540 master-0 kubenswrapper[26474]: I0223 13:29:51.844469 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="3071f1bd-cdeb-433e-bd4b-a02190489f95" containerName="mariadb-account-create-update" Feb 23 13:29:51.845259 master-0 kubenswrapper[26474]: I0223 13:29:51.845236 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:51.849546 master-0 kubenswrapper[26474]: I0223 13:29:51.849472 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 13:29:51.851143 master-0 kubenswrapper[26474]: I0223 13:29:51.851113 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 23 13:29:51.855253 master-0 kubenswrapper[26474]: I0223 13:29:51.855184 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d78v8"] Feb 23 13:29:51.949217 master-0 kubenswrapper[26474]: I0223 13:29:51.949097 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-config-data\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:51.949882 master-0 kubenswrapper[26474]: I0223 13:29:51.949838 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-scripts\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " 
pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:51.950003 master-0 kubenswrapper[26474]: I0223 13:29:51.949978 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:51.950254 master-0 kubenswrapper[26474]: I0223 13:29:51.950199 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6trs\" (UniqueName: \"kubernetes.io/projected/3c28125b-7561-4020-90ab-9dd7bbd740f3-kube-api-access-h6trs\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:51.964215 master-0 kubenswrapper[26474]: I0223 13:29:51.964160 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-mrlgq" event={"ID":"e9fcecfd-b548-47da-800d-24deb9834fa7","Type":"ContainerDied","Data":"92ab5d19f36f86bfedada7b6a5ea197a809b164a81f6400cdfb60c898334517f"} Feb 23 13:29:51.964215 master-0 kubenswrapper[26474]: I0223 13:29:51.964215 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92ab5d19f36f86bfedada7b6a5ea197a809b164a81f6400cdfb60c898334517f" Feb 23 13:29:51.964413 master-0 kubenswrapper[26474]: I0223 13:29:51.964250 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-mrlgq" Feb 23 13:29:52.052776 master-0 kubenswrapper[26474]: I0223 13:29:52.052703 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-scripts\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.053034 master-0 kubenswrapper[26474]: I0223 13:29:52.052927 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.053098 master-0 kubenswrapper[26474]: I0223 13:29:52.053061 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6trs\" (UniqueName: \"kubernetes.io/projected/3c28125b-7561-4020-90ab-9dd7bbd740f3-kube-api-access-h6trs\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.053232 master-0 kubenswrapper[26474]: I0223 13:29:52.053163 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-config-data\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.056282 master-0 kubenswrapper[26474]: I0223 13:29:52.056234 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-scripts\") pod 
\"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.058739 master-0 kubenswrapper[26474]: I0223 13:29:52.058678 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.065065 master-0 kubenswrapper[26474]: I0223 13:29:52.065003 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-config-data\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.071944 master-0 kubenswrapper[26474]: I0223 13:29:52.071911 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6trs\" (UniqueName: \"kubernetes.io/projected/3c28125b-7561-4020-90ab-9dd7bbd740f3-kube-api-access-h6trs\") pod \"nova-cell0-conductor-db-sync-d78v8\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.168915 master-0 kubenswrapper[26474]: I0223 13:29:52.168835 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:29:52.640715 master-0 kubenswrapper[26474]: I0223 13:29:52.640646 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d78v8"] Feb 23 13:29:52.981882 master-0 kubenswrapper[26474]: I0223 13:29:52.981705 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d78v8" event={"ID":"3c28125b-7561-4020-90ab-9dd7bbd740f3","Type":"ContainerStarted","Data":"80ae76d7b1e55b69ef76799063c5d1bb899122f8ae44d28b87259f738413f6c2"} Feb 23 13:29:54.620776 master-0 kubenswrapper[26474]: I0223 13:29:54.620085 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:54.620776 master-0 kubenswrapper[26474]: I0223 13:29:54.620149 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:54.639841 master-0 kubenswrapper[26474]: I0223 13:29:54.639778 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66b4f9f77-rzm8t"] Feb 23 13:29:54.641878 master-0 kubenswrapper[26474]: I0223 13:29:54.641842 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.653682 master-0 kubenswrapper[26474]: I0223 13:29:54.653604 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b4f9f77-rzm8t"] Feb 23 13:29:54.659276 master-0 kubenswrapper[26474]: I0223 13:29:54.659229 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:54.684978 master-0 kubenswrapper[26474]: I0223 13:29:54.684907 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:54.841852 master-0 kubenswrapper[26474]: I0223 13:29:54.841728 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-nb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.842050 master-0 kubenswrapper[26474]: I0223 13:29:54.841900 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-swift-storage-0\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.842050 master-0 kubenswrapper[26474]: I0223 13:29:54.841927 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkbpb\" (UniqueName: \"kubernetes.io/projected/c6e84d30-9468-4cb0-b244-9cf568e9a485-kube-api-access-vkbpb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.842050 master-0 kubenswrapper[26474]: I0223 
13:29:54.841958 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-sb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.842198 master-0 kubenswrapper[26474]: I0223 13:29:54.842081 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-config\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.842198 master-0 kubenswrapper[26474]: I0223 13:29:54.842103 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-svc\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.926429 master-0 kubenswrapper[26474]: I0223 13:29:54.926373 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:29:54.932525 master-0 kubenswrapper[26474]: I0223 13:29:54.932468 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 23 13:29:54.935802 master-0 kubenswrapper[26474]: I0223 13:29:54.935765 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 23 13:29:54.935978 master-0 kubenswrapper[26474]: I0223 13:29:54.935946 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 23 13:29:54.936175 master-0 kubenswrapper[26474]: I0223 13:29:54.936153 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 23 13:29:54.938743 master-0 kubenswrapper[26474]: I0223 13:29:54.938683 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:29:54.952940 master-0 kubenswrapper[26474]: I0223 13:29:54.952871 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-swift-storage-0\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.953140 master-0 kubenswrapper[26474]: I0223 13:29:54.952959 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkbpb\" (UniqueName: \"kubernetes.io/projected/c6e84d30-9468-4cb0-b244-9cf568e9a485-kube-api-access-vkbpb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.953140 master-0 kubenswrapper[26474]: I0223 13:29:54.953017 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-sb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " 
pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.953304 master-0 kubenswrapper[26474]: I0223 13:29:54.953271 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-config\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.953367 master-0 kubenswrapper[26474]: I0223 13:29:54.953323 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-svc\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.953609 master-0 kubenswrapper[26474]: I0223 13:29:54.953443 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-nb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.954089 master-0 kubenswrapper[26474]: I0223 13:29:54.954045 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-swift-storage-0\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.954615 master-0 kubenswrapper[26474]: I0223 13:29:54.954578 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-config\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" 
Feb 23 13:29:54.954720 master-0 kubenswrapper[26474]: I0223 13:29:54.954688 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-nb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.955136 master-0 kubenswrapper[26474]: I0223 13:29:54.955095 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-sb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.955554 master-0 kubenswrapper[26474]: I0223 13:29:54.955519 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-svc\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.974075 master-0 kubenswrapper[26474]: I0223 13:29:54.974018 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkbpb\" (UniqueName: \"kubernetes.io/projected/c6e84d30-9468-4cb0-b244-9cf568e9a485-kube-api-access-vkbpb\") pod \"dnsmasq-dns-66b4f9f77-rzm8t\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:54.990293 master-0 kubenswrapper[26474]: I0223 13:29:54.990213 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:55.019363 master-0 kubenswrapper[26474]: I0223 13:29:55.019292 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:55.019363 master-0 kubenswrapper[26474]: I0223 13:29:55.019369 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:55.043512 master-0 kubenswrapper[26474]: I0223 13:29:55.043449 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-7d6b446974-djn5h" Feb 23 13:29:55.056061 master-0 kubenswrapper[26474]: I0223 13:29:55.055984 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.056306 master-0 kubenswrapper[26474]: I0223 13:29:55.056120 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.056306 master-0 kubenswrapper[26474]: I0223 13:29:55.056207 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-scripts\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.056456 master-0 kubenswrapper[26474]: I0223 13:29:55.056426 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-config\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.056562 master-0 kubenswrapper[26474]: I0223 13:29:55.056505 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.056612 master-0 kubenswrapper[26474]: I0223 13:29:55.056576 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/adc16798-1bca-47cb-9e87-3d1ba72e0355-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.056693 master-0 kubenswrapper[26474]: I0223 13:29:55.056661 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnzjx\" (UniqueName: \"kubernetes.io/projected/adc16798-1bca-47cb-9e87-3d1ba72e0355-kube-api-access-rnzjx\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.158989 master-0 kubenswrapper[26474]: I0223 13:29:55.158823 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.158989 
master-0 kubenswrapper[26474]: I0223 13:29:55.158916 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/adc16798-1bca-47cb-9e87-3d1ba72e0355-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.158989 master-0 kubenswrapper[26474]: I0223 13:29:55.158982 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnzjx\" (UniqueName: \"kubernetes.io/projected/adc16798-1bca-47cb-9e87-3d1ba72e0355-kube-api-access-rnzjx\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.159301 master-0 kubenswrapper[26474]: I0223 13:29:55.159018 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.159301 master-0 kubenswrapper[26474]: I0223 13:29:55.159083 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.159301 master-0 kubenswrapper[26474]: I0223 13:29:55.159121 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-scripts\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.159301 master-0 kubenswrapper[26474]: I0223 13:29:55.159235 26474 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-config\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.163189 master-0 kubenswrapper[26474]: I0223 13:29:55.162111 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.163189 master-0 kubenswrapper[26474]: I0223 13:29:55.163008 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.165494 master-0 kubenswrapper[26474]: I0223 13:29:55.165454 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-config\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.166009 master-0 kubenswrapper[26474]: I0223 13:29:55.165981 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-scripts\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.167507 master-0 kubenswrapper[26474]: I0223 13:29:55.167467 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/adc16798-1bca-47cb-9e87-3d1ba72e0355-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.168871 master-0 kubenswrapper[26474]: I0223 13:29:55.168813 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.196689 master-0 kubenswrapper[26474]: I0223 13:29:55.196423 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnzjx\" (UniqueName: \"kubernetes.io/projected/adc16798-1bca-47cb-9e87-3d1ba72e0355-kube-api-access-rnzjx\") pod \"ironic-inspector-0\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " pod="openstack/ironic-inspector-0" Feb 23 13:29:55.274467 master-0 kubenswrapper[26474]: I0223 13:29:55.263628 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 23 13:29:55.351059 master-0 kubenswrapper[26474]: I0223 13:29:55.350971 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:55.351059 master-0 kubenswrapper[26474]: I0223 13:29:55.351054 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:55.433274 master-0 kubenswrapper[26474]: I0223 13:29:55.433097 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:55.441416 master-0 kubenswrapper[26474]: I0223 13:29:55.437496 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:55.553988 master-0 kubenswrapper[26474]: I0223 13:29:55.553873 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b4f9f77-rzm8t"] Feb 23 13:29:55.557661 master-0 kubenswrapper[26474]: W0223 13:29:55.557513 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6e84d30_9468_4cb0_b244_9cf568e9a485.slice/crio-47e7ebe5b99d7106d065f3fd48bfccf30d635b30aa3dae2394bede3dc1fcc37d WatchSource:0}: Error finding container 47e7ebe5b99d7106d065f3fd48bfccf30d635b30aa3dae2394bede3dc1fcc37d: Status 404 returned error can't find the container with id 47e7ebe5b99d7106d065f3fd48bfccf30d635b30aa3dae2394bede3dc1fcc37d Feb 23 13:29:55.922783 master-0 kubenswrapper[26474]: I0223 13:29:55.916309 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:29:56.047560 master-0 kubenswrapper[26474]: I0223 13:29:56.045933 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"adc16798-1bca-47cb-9e87-3d1ba72e0355","Type":"ContainerStarted","Data":"b9068bb1dcf38b704f4a711b31a8260400e9004a75007da6bedd60d48bad6df4"} Feb 23 13:29:56.053542 master-0 kubenswrapper[26474]: I0223 13:29:56.053125 26474 generic.go:334] "Generic (PLEG): container finished" podID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerID="ea3e571ff51f1964d8189478fe406f3c12e2fc58076fa151ea1acc4712e637a8" exitCode=0 Feb 23 13:29:56.054290 master-0 kubenswrapper[26474]: I0223 13:29:56.053779 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" event={"ID":"c6e84d30-9468-4cb0-b244-9cf568e9a485","Type":"ContainerDied","Data":"ea3e571ff51f1964d8189478fe406f3c12e2fc58076fa151ea1acc4712e637a8"} Feb 23 13:29:56.054290 master-0 kubenswrapper[26474]: I0223 13:29:56.053840 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" event={"ID":"c6e84d30-9468-4cb0-b244-9cf568e9a485","Type":"ContainerStarted","Data":"47e7ebe5b99d7106d065f3fd48bfccf30d635b30aa3dae2394bede3dc1fcc37d"} Feb 23 13:29:56.055522 master-0 kubenswrapper[26474]: I0223 13:29:56.055309 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:56.055522 master-0 kubenswrapper[26474]: I0223 13:29:56.055363 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:57.075867 master-0 kubenswrapper[26474]: I0223 13:29:57.075792 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" event={"ID":"c6e84d30-9468-4cb0-b244-9cf568e9a485","Type":"ContainerStarted","Data":"648d18d293547c2fc5329ff800dfa0fa713ef1452251259a7246c8274b40e7cf"} Feb 23 13:29:57.124957 master-0 kubenswrapper[26474]: I0223 13:29:57.124863 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" 
podStartSLOduration=3.124843978 podStartE2EDuration="3.124843978s" podCreationTimestamp="2026-02-23 13:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:57.105526237 +0000 UTC m=+918.952033944" watchObservedRunningTime="2026-02-23 13:29:57.124843978 +0000 UTC m=+918.971351655" Feb 23 13:29:57.170821 master-0 kubenswrapper[26474]: I0223 13:29:57.170762 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:57.171395 master-0 kubenswrapper[26474]: I0223 13:29:57.171330 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:29:57.244639 master-0 kubenswrapper[26474]: I0223 13:29:57.244528 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4fec4-default-internal-api-0" Feb 23 13:29:58.099257 master-0 kubenswrapper[26474]: I0223 13:29:58.099220 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:29:58.099943 master-0 kubenswrapper[26474]: I0223 13:29:58.099926 26474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:29:58.101458 master-0 kubenswrapper[26474]: I0223 13:29:58.101368 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:29:58.243369 master-0 kubenswrapper[26474]: I0223 13:29:58.242109 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:29:58.601018 master-0 kubenswrapper[26474]: I0223 13:29:58.600452 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:29:58.608014 master-0 kubenswrapper[26474]: I0223 13:29:58.607818 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-4fec4-default-external-api-0" Feb 23 13:30:00.196785 master-0 kubenswrapper[26474]: I0223 13:30:00.195398 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9"] Feb 23 13:30:00.198959 master-0 kubenswrapper[26474]: I0223 13:30:00.198916 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.202292 master-0 kubenswrapper[26474]: I0223 13:30:00.201509 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 13:30:00.202292 master-0 kubenswrapper[26474]: I0223 13:30:00.202146 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-pnt5q" Feb 23 13:30:00.233649 master-0 kubenswrapper[26474]: I0223 13:30:00.232659 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9"] Feb 23 13:30:00.242559 master-0 kubenswrapper[26474]: I0223 13:30:00.239854 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bps5j\" (UniqueName: \"kubernetes.io/projected/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-kube-api-access-bps5j\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.242559 master-0 kubenswrapper[26474]: I0223 13:30:00.240123 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-config-volume\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.242559 master-0 kubenswrapper[26474]: I0223 13:30:00.240180 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-secret-volume\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.344036 master-0 kubenswrapper[26474]: I0223 13:30:00.343974 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-config-volume\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.344269 master-0 kubenswrapper[26474]: I0223 13:30:00.344231 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-secret-volume\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.344477 master-0 kubenswrapper[26474]: I0223 13:30:00.344447 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bps5j\" (UniqueName: \"kubernetes.io/projected/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-kube-api-access-bps5j\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.345793 master-0 kubenswrapper[26474]: I0223 13:30:00.345743 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-config-volume\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.360460 master-0 kubenswrapper[26474]: I0223 13:30:00.349648 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-secret-volume\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.370110 master-0 kubenswrapper[26474]: I0223 13:30:00.370062 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bps5j\" (UniqueName: \"kubernetes.io/projected/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-kube-api-access-bps5j\") pod \"collect-profiles-29530890-l2nz9\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:00.539551 master-0 kubenswrapper[26474]: I0223 13:30:00.539444 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:03.747371 master-0 kubenswrapper[26474]: I0223 13:30:03.743128 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9"] Feb 23 13:30:04.476847 master-0 kubenswrapper[26474]: W0223 13:30:04.475870 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd5ae39_6381_4ad1_8dc2_e90aadcf0f6f.slice/crio-add30052d34bcf406a67538e8680f0444c5eef91f89e5a1133110bba87073b9f WatchSource:0}: Error finding container add30052d34bcf406a67538e8680f0444c5eef91f89e5a1133110bba87073b9f: Status 404 returned error can't find the container with id add30052d34bcf406a67538e8680f0444c5eef91f89e5a1133110bba87073b9f Feb 23 13:30:04.992711 master-0 kubenswrapper[26474]: I0223 13:30:04.992563 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:30:05.202555 master-0 kubenswrapper[26474]: I0223 13:30:05.202471 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d78v8" event={"ID":"3c28125b-7561-4020-90ab-9dd7bbd740f3","Type":"ContainerStarted","Data":"ac0860c3c5619e204b320ba56e874cfa7e88c58b7e8d2cd2a4d2f7eb4700100f"} Feb 23 13:30:05.207005 master-0 kubenswrapper[26474]: I0223 13:30:05.206936 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" event={"ID":"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f","Type":"ContainerStarted","Data":"464c9577588b35ddbd8f91781683eb2606c2df711987e2da3a3d462980ea05ff"} Feb 23 13:30:05.207005 master-0 kubenswrapper[26474]: I0223 13:30:05.206999 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" 
event={"ID":"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f","Type":"ContainerStarted","Data":"add30052d34bcf406a67538e8680f0444c5eef91f89e5a1133110bba87073b9f"} Feb 23 13:30:05.281625 master-0 kubenswrapper[26474]: I0223 13:30:05.281525 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-d78v8" podStartSLOduration=2.39496133 podStartE2EDuration="14.281499834s" podCreationTimestamp="2026-02-23 13:29:51 +0000 UTC" firstStartedPulling="2026-02-23 13:29:52.663678681 +0000 UTC m=+914.510186358" lastFinishedPulling="2026-02-23 13:30:04.550217185 +0000 UTC m=+926.396724862" observedRunningTime="2026-02-23 13:30:05.243617441 +0000 UTC m=+927.090125128" watchObservedRunningTime="2026-02-23 13:30:05.281499834 +0000 UTC m=+927.128007511" Feb 23 13:30:05.322413 master-0 kubenswrapper[26474]: I0223 13:30:05.318141 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" podStartSLOduration=5.318110868 podStartE2EDuration="5.318110868s" podCreationTimestamp="2026-02-23 13:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:05.261123958 +0000 UTC m=+927.107631655" watchObservedRunningTime="2026-02-23 13:30:05.318110868 +0000 UTC m=+927.164618545" Feb 23 13:30:05.356408 master-0 kubenswrapper[26474]: I0223 13:30:05.356309 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7ff48b5-8ngpr"] Feb 23 13:30:05.356734 master-0 kubenswrapper[26474]: I0223 13:30:05.356692 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" podUID="8d57ee11-aa24-43e7-a712-03b2b12220d1" containerName="dnsmasq-dns" containerID="cri-o://589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4" gracePeriod=10 Feb 23 13:30:05.742932 master-0 
kubenswrapper[26474]: E0223 13:30:05.742856 26474 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d57ee11_aa24_43e7_a712_03b2b12220d1.slice/crio-589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d57ee11_aa24_43e7_a712_03b2b12220d1.slice/crio-conmon-589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd5ae39_6381_4ad1_8dc2_e90aadcf0f6f.slice/crio-conmon-464c9577588b35ddbd8f91781683eb2606c2df711987e2da3a3d462980ea05ff.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:30:06.032583 master-0 kubenswrapper[26474]: I0223 13:30:06.032530 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:30:06.108669 master-0 kubenswrapper[26474]: I0223 13:30:06.108520 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvczd\" (UniqueName: \"kubernetes.io/projected/8d57ee11-aa24-43e7-a712-03b2b12220d1-kube-api-access-gvczd\") pod \"8d57ee11-aa24-43e7-a712-03b2b12220d1\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " Feb 23 13:30:06.108914 master-0 kubenswrapper[26474]: I0223 13:30:06.108890 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-sb\") pod \"8d57ee11-aa24-43e7-a712-03b2b12220d1\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " Feb 23 13:30:06.109444 master-0 kubenswrapper[26474]: I0223 13:30:06.109005 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-nb\") pod \"8d57ee11-aa24-43e7-a712-03b2b12220d1\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " Feb 23 13:30:06.109444 master-0 kubenswrapper[26474]: I0223 13:30:06.109042 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-config\") pod \"8d57ee11-aa24-43e7-a712-03b2b12220d1\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " Feb 23 13:30:06.109568 master-0 kubenswrapper[26474]: I0223 13:30:06.109475 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-swift-storage-0\") pod \"8d57ee11-aa24-43e7-a712-03b2b12220d1\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " Feb 23 13:30:06.109568 master-0 kubenswrapper[26474]: I0223 13:30:06.109517 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-svc\") pod \"8d57ee11-aa24-43e7-a712-03b2b12220d1\" (UID: \"8d57ee11-aa24-43e7-a712-03b2b12220d1\") " Feb 23 13:30:06.116501 master-0 kubenswrapper[26474]: I0223 13:30:06.116387 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d57ee11-aa24-43e7-a712-03b2b12220d1-kube-api-access-gvczd" (OuterVolumeSpecName: "kube-api-access-gvczd") pod "8d57ee11-aa24-43e7-a712-03b2b12220d1" (UID: "8d57ee11-aa24-43e7-a712-03b2b12220d1"). InnerVolumeSpecName "kube-api-access-gvczd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:06.186962 master-0 kubenswrapper[26474]: I0223 13:30:06.186859 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8d57ee11-aa24-43e7-a712-03b2b12220d1" (UID: "8d57ee11-aa24-43e7-a712-03b2b12220d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:06.187214 master-0 kubenswrapper[26474]: I0223 13:30:06.187157 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d57ee11-aa24-43e7-a712-03b2b12220d1" (UID: "8d57ee11-aa24-43e7-a712-03b2b12220d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:06.187518 master-0 kubenswrapper[26474]: I0223 13:30:06.187478 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8d57ee11-aa24-43e7-a712-03b2b12220d1" (UID: "8d57ee11-aa24-43e7-a712-03b2b12220d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:06.194084 master-0 kubenswrapper[26474]: I0223 13:30:06.193997 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8d57ee11-aa24-43e7-a712-03b2b12220d1" (UID: "8d57ee11-aa24-43e7-a712-03b2b12220d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:06.196066 master-0 kubenswrapper[26474]: I0223 13:30:06.195990 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-config" (OuterVolumeSpecName: "config") pod "8d57ee11-aa24-43e7-a712-03b2b12220d1" (UID: "8d57ee11-aa24-43e7-a712-03b2b12220d1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:06.212847 master-0 kubenswrapper[26474]: I0223 13:30:06.212788 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:06.212847 master-0 kubenswrapper[26474]: I0223 13:30:06.212838 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:06.212847 master-0 kubenswrapper[26474]: I0223 13:30:06.212850 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvczd\" (UniqueName: \"kubernetes.io/projected/8d57ee11-aa24-43e7-a712-03b2b12220d1-kube-api-access-gvczd\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:06.212847 master-0 kubenswrapper[26474]: I0223 13:30:06.212863 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:06.213161 master-0 kubenswrapper[26474]: I0223 13:30:06.212871 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:06.213161 master-0 kubenswrapper[26474]: I0223 13:30:06.212880 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d57ee11-aa24-43e7-a712-03b2b12220d1-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:06.223450 master-0 kubenswrapper[26474]: I0223 13:30:06.223410 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" Feb 23 13:30:06.223564 master-0 kubenswrapper[26474]: I0223 13:30:06.223444 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" event={"ID":"8d57ee11-aa24-43e7-a712-03b2b12220d1","Type":"ContainerDied","Data":"589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4"} Feb 23 13:30:06.223564 master-0 kubenswrapper[26474]: I0223 13:30:06.223530 26474 scope.go:117] "RemoveContainer" containerID="589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4" Feb 23 13:30:06.223644 master-0 kubenswrapper[26474]: I0223 13:30:06.223411 26474 generic.go:334] "Generic (PLEG): container finished" podID="8d57ee11-aa24-43e7-a712-03b2b12220d1" containerID="589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4" exitCode=0 Feb 23 13:30:06.223754 master-0 kubenswrapper[26474]: I0223 13:30:06.223665 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7ff48b5-8ngpr" event={"ID":"8d57ee11-aa24-43e7-a712-03b2b12220d1","Type":"ContainerDied","Data":"03629aef69a27ad3433a9da5cf1beb4a3bd8bf1a60e6239b2ae23390485f9a58"} Feb 23 13:30:06.226867 master-0 kubenswrapper[26474]: I0223 13:30:06.226720 26474 generic.go:334] "Generic (PLEG): container finished" podID="3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f" containerID="464c9577588b35ddbd8f91781683eb2606c2df711987e2da3a3d462980ea05ff" exitCode=0 Feb 23 13:30:06.226867 master-0 kubenswrapper[26474]: I0223 13:30:06.226846 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" event={"ID":"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f","Type":"ContainerDied","Data":"464c9577588b35ddbd8f91781683eb2606c2df711987e2da3a3d462980ea05ff"} Feb 23 13:30:06.331293 master-0 kubenswrapper[26474]: I0223 13:30:06.331239 26474 scope.go:117] "RemoveContainer" 
containerID="73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43" Feb 23 13:30:06.337858 master-0 kubenswrapper[26474]: I0223 13:30:06.335953 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7ff48b5-8ngpr"] Feb 23 13:30:06.355534 master-0 kubenswrapper[26474]: I0223 13:30:06.353263 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7ff48b5-8ngpr"] Feb 23 13:30:06.389746 master-0 kubenswrapper[26474]: I0223 13:30:06.388935 26474 scope.go:117] "RemoveContainer" containerID="589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4" Feb 23 13:30:06.391247 master-0 kubenswrapper[26474]: E0223 13:30:06.390458 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4\": container with ID starting with 589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4 not found: ID does not exist" containerID="589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4" Feb 23 13:30:06.391247 master-0 kubenswrapper[26474]: I0223 13:30:06.390494 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4"} err="failed to get container status \"589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4\": rpc error: code = NotFound desc = could not find container \"589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4\": container with ID starting with 589af4e13e70aa7696df1cfd8647388dc8f8f30065cec7fba427399285db38e4 not found: ID does not exist" Feb 23 13:30:06.391247 master-0 kubenswrapper[26474]: I0223 13:30:06.390517 26474 scope.go:117] "RemoveContainer" containerID="73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43" Feb 23 13:30:06.391247 master-0 kubenswrapper[26474]: E0223 13:30:06.391069 26474 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43\": container with ID starting with 73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43 not found: ID does not exist" containerID="73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43" Feb 23 13:30:06.391247 master-0 kubenswrapper[26474]: I0223 13:30:06.391097 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43"} err="failed to get container status \"73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43\": rpc error: code = NotFound desc = could not find container \"73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43\": container with ID starting with 73ef6389ed5285d7079ef622b5d81804215f0caf92b0b339c7ca77ad38bd0e43 not found: ID does not exist" Feb 23 13:30:06.408554 master-0 kubenswrapper[26474]: I0223 13:30:06.408476 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d57ee11-aa24-43e7-a712-03b2b12220d1" path="/var/lib/kubelet/pods/8d57ee11-aa24-43e7-a712-03b2b12220d1/volumes" Feb 23 13:30:07.750577 master-0 kubenswrapper[26474]: I0223 13:30:07.750478 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:07.859797 master-0 kubenswrapper[26474]: I0223 13:30:07.859737 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bps5j\" (UniqueName: \"kubernetes.io/projected/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-kube-api-access-bps5j\") pod \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " Feb 23 13:30:07.860053 master-0 kubenswrapper[26474]: I0223 13:30:07.860010 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-config-volume\") pod \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " Feb 23 13:30:07.860171 master-0 kubenswrapper[26474]: I0223 13:30:07.860131 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-secret-volume\") pod \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\" (UID: \"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f\") " Feb 23 13:30:07.861626 master-0 kubenswrapper[26474]: I0223 13:30:07.861594 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-config-volume" (OuterVolumeSpecName: "config-volume") pod "3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f" (UID: "3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:07.863408 master-0 kubenswrapper[26474]: I0223 13:30:07.863321 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f" (UID: "3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:07.864799 master-0 kubenswrapper[26474]: I0223 13:30:07.864714 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-kube-api-access-bps5j" (OuterVolumeSpecName: "kube-api-access-bps5j") pod "3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f" (UID: "3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f"). InnerVolumeSpecName "kube-api-access-bps5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:07.964046 master-0 kubenswrapper[26474]: I0223 13:30:07.963978 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bps5j\" (UniqueName: \"kubernetes.io/projected/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-kube-api-access-bps5j\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:07.964046 master-0 kubenswrapper[26474]: I0223 13:30:07.964031 26474 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:07.964046 master-0 kubenswrapper[26474]: I0223 13:30:07.964044 26474 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:08.257258 master-0 kubenswrapper[26474]: I0223 13:30:08.257207 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" event={"ID":"3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f","Type":"ContainerDied","Data":"add30052d34bcf406a67538e8680f0444c5eef91f89e5a1133110bba87073b9f"} Feb 23 13:30:08.257258 master-0 kubenswrapper[26474]: I0223 13:30:08.257256 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="add30052d34bcf406a67538e8680f0444c5eef91f89e5a1133110bba87073b9f" Feb 23 13:30:08.257519 master-0 kubenswrapper[26474]: I0223 13:30:08.257265 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-l2nz9" Feb 23 13:30:21.413533 master-0 kubenswrapper[26474]: I0223 13:30:21.413422 26474 generic.go:334] "Generic (PLEG): container finished" podID="3c28125b-7561-4020-90ab-9dd7bbd740f3" containerID="ac0860c3c5619e204b320ba56e874cfa7e88c58b7e8d2cd2a4d2f7eb4700100f" exitCode=0 Feb 23 13:30:21.413533 master-0 kubenswrapper[26474]: I0223 13:30:21.413503 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d78v8" event={"ID":"3c28125b-7561-4020-90ab-9dd7bbd740f3","Type":"ContainerDied","Data":"ac0860c3c5619e204b320ba56e874cfa7e88c58b7e8d2cd2a4d2f7eb4700100f"} Feb 23 13:30:22.989983 master-0 kubenswrapper[26474]: I0223 13:30:22.989465 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:30:23.006839 master-0 kubenswrapper[26474]: I0223 13:30:23.006759 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-scripts\") pod \"3c28125b-7561-4020-90ab-9dd7bbd740f3\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " Feb 23 13:30:23.007265 master-0 kubenswrapper[26474]: I0223 13:30:23.006880 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6trs\" (UniqueName: \"kubernetes.io/projected/3c28125b-7561-4020-90ab-9dd7bbd740f3-kube-api-access-h6trs\") pod \"3c28125b-7561-4020-90ab-9dd7bbd740f3\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " Feb 23 13:30:23.007265 master-0 kubenswrapper[26474]: I0223 13:30:23.006958 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-config-data\") pod \"3c28125b-7561-4020-90ab-9dd7bbd740f3\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " Feb 23 13:30:23.007265 master-0 kubenswrapper[26474]: I0223 13:30:23.006993 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-combined-ca-bundle\") pod \"3c28125b-7561-4020-90ab-9dd7bbd740f3\" (UID: \"3c28125b-7561-4020-90ab-9dd7bbd740f3\") " Feb 23 13:30:23.012271 master-0 kubenswrapper[26474]: I0223 13:30:23.010532 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-scripts" (OuterVolumeSpecName: "scripts") pod "3c28125b-7561-4020-90ab-9dd7bbd740f3" (UID: "3c28125b-7561-4020-90ab-9dd7bbd740f3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:23.013471 master-0 kubenswrapper[26474]: I0223 13:30:23.013419 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c28125b-7561-4020-90ab-9dd7bbd740f3-kube-api-access-h6trs" (OuterVolumeSpecName: "kube-api-access-h6trs") pod "3c28125b-7561-4020-90ab-9dd7bbd740f3" (UID: "3c28125b-7561-4020-90ab-9dd7bbd740f3"). InnerVolumeSpecName "kube-api-access-h6trs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:23.067384 master-0 kubenswrapper[26474]: I0223 13:30:23.054622 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c28125b-7561-4020-90ab-9dd7bbd740f3" (UID: "3c28125b-7561-4020-90ab-9dd7bbd740f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:23.067384 master-0 kubenswrapper[26474]: I0223 13:30:23.066444 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-config-data" (OuterVolumeSpecName: "config-data") pod "3c28125b-7561-4020-90ab-9dd7bbd740f3" (UID: "3c28125b-7561-4020-90ab-9dd7bbd740f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:23.111213 master-0 kubenswrapper[26474]: I0223 13:30:23.111128 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6trs\" (UniqueName: \"kubernetes.io/projected/3c28125b-7561-4020-90ab-9dd7bbd740f3-kube-api-access-h6trs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:23.111213 master-0 kubenswrapper[26474]: I0223 13:30:23.111185 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:23.111213 master-0 kubenswrapper[26474]: I0223 13:30:23.111198 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:23.111213 master-0 kubenswrapper[26474]: I0223 13:30:23.111208 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c28125b-7561-4020-90ab-9dd7bbd740f3-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:23.466589 master-0 kubenswrapper[26474]: I0223 13:30:23.466468 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d78v8" event={"ID":"3c28125b-7561-4020-90ab-9dd7bbd740f3","Type":"ContainerDied","Data":"80ae76d7b1e55b69ef76799063c5d1bb899122f8ae44d28b87259f738413f6c2"} Feb 23 13:30:23.466589 master-0 kubenswrapper[26474]: I0223 13:30:23.466557 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ae76d7b1e55b69ef76799063c5d1bb899122f8ae44d28b87259f738413f6c2" Feb 23 13:30:23.467086 master-0 kubenswrapper[26474]: I0223 13:30:23.466679 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d78v8" Feb 23 13:30:23.590143 master-0 kubenswrapper[26474]: I0223 13:30:23.590055 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 13:30:23.590772 master-0 kubenswrapper[26474]: E0223 13:30:23.590735 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d57ee11-aa24-43e7-a712-03b2b12220d1" containerName="init" Feb 23 13:30:23.590772 master-0 kubenswrapper[26474]: I0223 13:30:23.590762 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d57ee11-aa24-43e7-a712-03b2b12220d1" containerName="init" Feb 23 13:30:23.590772 master-0 kubenswrapper[26474]: E0223 13:30:23.590774 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d57ee11-aa24-43e7-a712-03b2b12220d1" containerName="dnsmasq-dns" Feb 23 13:30:23.590897 master-0 kubenswrapper[26474]: I0223 13:30:23.590782 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d57ee11-aa24-43e7-a712-03b2b12220d1" containerName="dnsmasq-dns" Feb 23 13:30:23.590897 master-0 kubenswrapper[26474]: E0223 13:30:23.590818 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c28125b-7561-4020-90ab-9dd7bbd740f3" containerName="nova-cell0-conductor-db-sync" Feb 23 13:30:23.590897 master-0 kubenswrapper[26474]: I0223 13:30:23.590825 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c28125b-7561-4020-90ab-9dd7bbd740f3" containerName="nova-cell0-conductor-db-sync" Feb 23 13:30:23.590897 master-0 kubenswrapper[26474]: E0223 13:30:23.590842 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f" containerName="collect-profiles" Feb 23 13:30:23.590897 master-0 kubenswrapper[26474]: I0223 13:30:23.590850 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f" containerName="collect-profiles" Feb 23 13:30:23.591136 master-0 kubenswrapper[26474]: 
I0223 13:30:23.591106 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd5ae39-6381-4ad1-8dc2-e90aadcf0f6f" containerName="collect-profiles" Feb 23 13:30:23.591179 master-0 kubenswrapper[26474]: I0223 13:30:23.591163 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d57ee11-aa24-43e7-a712-03b2b12220d1" containerName="dnsmasq-dns" Feb 23 13:30:23.591217 master-0 kubenswrapper[26474]: I0223 13:30:23.591182 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c28125b-7561-4020-90ab-9dd7bbd740f3" containerName="nova-cell0-conductor-db-sync" Feb 23 13:30:23.592106 master-0 kubenswrapper[26474]: I0223 13:30:23.592069 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.607904 master-0 kubenswrapper[26474]: I0223 13:30:23.607224 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 13:30:23.618499 master-0 kubenswrapper[26474]: I0223 13:30:23.618053 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 13:30:23.629760 master-0 kubenswrapper[26474]: I0223 13:30:23.628480 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52nv2\" (UniqueName: \"kubernetes.io/projected/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-kube-api-access-52nv2\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.630412 master-0 kubenswrapper[26474]: I0223 13:30:23.630378 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " 
pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.630720 master-0 kubenswrapper[26474]: I0223 13:30:23.630698 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.733849 master-0 kubenswrapper[26474]: I0223 13:30:23.732726 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52nv2\" (UniqueName: \"kubernetes.io/projected/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-kube-api-access-52nv2\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.733849 master-0 kubenswrapper[26474]: I0223 13:30:23.732871 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.733849 master-0 kubenswrapper[26474]: I0223 13:30:23.732958 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.742694 master-0 kubenswrapper[26474]: I0223 13:30:23.736702 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " pod="openstack/nova-cell0-conductor-0" 
Feb 23 13:30:23.742694 master-0 kubenswrapper[26474]: I0223 13:30:23.737475 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.753302 master-0 kubenswrapper[26474]: I0223 13:30:23.753263 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52nv2\" (UniqueName: \"kubernetes.io/projected/71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff-kube-api-access-52nv2\") pod \"nova-cell0-conductor-0\" (UID: \"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:23.971474 master-0 kubenswrapper[26474]: I0223 13:30:23.971382 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:24.481794 master-0 kubenswrapper[26474]: W0223 13:30:24.481726 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71e3b8dd_9b1b_43d6_bbd9_34481d6a43ff.slice/crio-27b5160bbb02902d862488f79b58501b860067ba451d8e33576740314b77e076 WatchSource:0}: Error finding container 27b5160bbb02902d862488f79b58501b860067ba451d8e33576740314b77e076: Status 404 returned error can't find the container with id 27b5160bbb02902d862488f79b58501b860067ba451d8e33576740314b77e076 Feb 23 13:30:24.483590 master-0 kubenswrapper[26474]: I0223 13:30:24.483479 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 13:30:25.497300 master-0 kubenswrapper[26474]: I0223 13:30:25.497217 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff","Type":"ContainerStarted","Data":"c0c5dc8d78cdede63aa628a5e5bd6ca78326439a4493cbe35688fb2fb1946f93"} Feb 23 13:30:25.498107 master-0 kubenswrapper[26474]: I0223 13:30:25.497614 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:25.498107 master-0 kubenswrapper[26474]: I0223 13:30:25.497816 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff","Type":"ContainerStarted","Data":"27b5160bbb02902d862488f79b58501b860067ba451d8e33576740314b77e076"} Feb 23 13:30:25.566639 master-0 kubenswrapper[26474]: I0223 13:30:25.562196 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.562155474 podStartE2EDuration="2.562155474s" podCreationTimestamp="2026-02-23 13:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:25.534933991 +0000 UTC m=+947.381441678" watchObservedRunningTime="2026-02-23 13:30:25.562155474 +0000 UTC m=+947.408663141" Feb 23 13:30:29.542994 master-0 kubenswrapper[26474]: I0223 13:30:29.542919 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerStarted","Data":"7981b3b87eaec99bd9e9f06fb28beee544643bf31da41770e0cb6b08af41642b"} Feb 23 13:30:29.546569 master-0 kubenswrapper[26474]: I0223 13:30:29.546521 26474 generic.go:334] "Generic (PLEG): container finished" podID="adc16798-1bca-47cb-9e87-3d1ba72e0355" containerID="b09f353b39a30f5d4b73d3b8e4b8395aa57b32d89aa4bae3ae46b0c924897f9c" exitCode=0 Feb 23 13:30:29.546569 master-0 kubenswrapper[26474]: I0223 13:30:29.546568 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"adc16798-1bca-47cb-9e87-3d1ba72e0355","Type":"ContainerDied","Data":"b09f353b39a30f5d4b73d3b8e4b8395aa57b32d89aa4bae3ae46b0c924897f9c"} Feb 23 13:30:30.221198 master-0 kubenswrapper[26474]: I0223 13:30:30.221116 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 23 13:30:30.327563 master-0 kubenswrapper[26474]: I0223 13:30:30.327498 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/adc16798-1bca-47cb-9e87-3d1ba72e0355-etc-podinfo\") pod \"adc16798-1bca-47cb-9e87-3d1ba72e0355\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " Feb 23 13:30:30.327792 master-0 kubenswrapper[26474]: I0223 13:30:30.327637 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-combined-ca-bundle\") pod \"adc16798-1bca-47cb-9e87-3d1ba72e0355\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " Feb 23 13:30:30.327792 master-0 kubenswrapper[26474]: I0223 13:30:30.327673 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-config\") pod \"adc16798-1bca-47cb-9e87-3d1ba72e0355\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " Feb 23 13:30:30.327914 master-0 kubenswrapper[26474]: I0223 13:30:30.327893 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic\") pod \"adc16798-1bca-47cb-9e87-3d1ba72e0355\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " Feb 23 13:30:30.328041 master-0 kubenswrapper[26474]: I0223 13:30:30.328017 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnzjx\" (UniqueName: 
\"kubernetes.io/projected/adc16798-1bca-47cb-9e87-3d1ba72e0355-kube-api-access-rnzjx\") pod \"adc16798-1bca-47cb-9e87-3d1ba72e0355\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " Feb 23 13:30:30.328108 master-0 kubenswrapper[26474]: I0223 13:30:30.328045 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-scripts\") pod \"adc16798-1bca-47cb-9e87-3d1ba72e0355\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " Feb 23 13:30:30.328108 master-0 kubenswrapper[26474]: I0223 13:30:30.328084 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"adc16798-1bca-47cb-9e87-3d1ba72e0355\" (UID: \"adc16798-1bca-47cb-9e87-3d1ba72e0355\") " Feb 23 13:30:30.328720 master-0 kubenswrapper[26474]: I0223 13:30:30.328683 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "adc16798-1bca-47cb-9e87-3d1ba72e0355" (UID: "adc16798-1bca-47cb-9e87-3d1ba72e0355"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:30.328907 master-0 kubenswrapper[26474]: I0223 13:30:30.328831 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "adc16798-1bca-47cb-9e87-3d1ba72e0355" (UID: "adc16798-1bca-47cb-9e87-3d1ba72e0355"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:30.329902 master-0 kubenswrapper[26474]: I0223 13:30:30.329856 26474 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:30.329961 master-0 kubenswrapper[26474]: I0223 13:30:30.329910 26474 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/adc16798-1bca-47cb-9e87-3d1ba72e0355-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:30.331361 master-0 kubenswrapper[26474]: I0223 13:30:30.331279 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-scripts" (OuterVolumeSpecName: "scripts") pod "adc16798-1bca-47cb-9e87-3d1ba72e0355" (UID: "adc16798-1bca-47cb-9e87-3d1ba72e0355"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:30.331923 master-0 kubenswrapper[26474]: I0223 13:30:30.331872 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc16798-1bca-47cb-9e87-3d1ba72e0355-kube-api-access-rnzjx" (OuterVolumeSpecName: "kube-api-access-rnzjx") pod "adc16798-1bca-47cb-9e87-3d1ba72e0355" (UID: "adc16798-1bca-47cb-9e87-3d1ba72e0355"). InnerVolumeSpecName "kube-api-access-rnzjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:30.332477 master-0 kubenswrapper[26474]: I0223 13:30:30.332435 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-config" (OuterVolumeSpecName: "config") pod "adc16798-1bca-47cb-9e87-3d1ba72e0355" (UID: "adc16798-1bca-47cb-9e87-3d1ba72e0355"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:30.334500 master-0 kubenswrapper[26474]: I0223 13:30:30.334479 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/adc16798-1bca-47cb-9e87-3d1ba72e0355-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "adc16798-1bca-47cb-9e87-3d1ba72e0355" (UID: "adc16798-1bca-47cb-9e87-3d1ba72e0355"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 13:30:30.372362 master-0 kubenswrapper[26474]: I0223 13:30:30.370976 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adc16798-1bca-47cb-9e87-3d1ba72e0355" (UID: "adc16798-1bca-47cb-9e87-3d1ba72e0355"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:30.432810 master-0 kubenswrapper[26474]: I0223 13:30:30.432627 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnzjx\" (UniqueName: \"kubernetes.io/projected/adc16798-1bca-47cb-9e87-3d1ba72e0355-kube-api-access-rnzjx\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:30.432810 master-0 kubenswrapper[26474]: I0223 13:30:30.432680 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:30.432810 master-0 kubenswrapper[26474]: I0223 13:30:30.432691 26474 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/adc16798-1bca-47cb-9e87-3d1ba72e0355-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:30.432810 master-0 kubenswrapper[26474]: I0223 13:30:30.432699 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:30.432810 master-0 kubenswrapper[26474]: I0223 13:30:30.432708 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/adc16798-1bca-47cb-9e87-3d1ba72e0355-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:30.561777 master-0 kubenswrapper[26474]: I0223 13:30:30.561722 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 23 13:30:30.562328 master-0 kubenswrapper[26474]: I0223 13:30:30.561799 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"adc16798-1bca-47cb-9e87-3d1ba72e0355","Type":"ContainerDied","Data":"b9068bb1dcf38b704f4a711b31a8260400e9004a75007da6bedd60d48bad6df4"} Feb 23 13:30:30.562328 master-0 kubenswrapper[26474]: I0223 13:30:30.561840 26474 scope.go:117] "RemoveContainer" containerID="b09f353b39a30f5d4b73d3b8e4b8395aa57b32d89aa4bae3ae46b0c924897f9c" Feb 23 13:30:30.670191 master-0 kubenswrapper[26474]: I0223 13:30:30.670148 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:30:30.687395 master-0 kubenswrapper[26474]: I0223 13:30:30.687253 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:30:30.701463 master-0 kubenswrapper[26474]: I0223 13:30:30.701364 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:30:30.702390 master-0 kubenswrapper[26474]: E0223 13:30:30.702314 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc16798-1bca-47cb-9e87-3d1ba72e0355" containerName="ironic-python-agent-init" Feb 23 13:30:30.702471 master-0 kubenswrapper[26474]: I0223 13:30:30.702391 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc16798-1bca-47cb-9e87-3d1ba72e0355" 
containerName="ironic-python-agent-init" Feb 23 13:30:30.702823 master-0 kubenswrapper[26474]: I0223 13:30:30.702796 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc16798-1bca-47cb-9e87-3d1ba72e0355" containerName="ironic-python-agent-init" Feb 23 13:30:30.709981 master-0 kubenswrapper[26474]: I0223 13:30:30.709916 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 23 13:30:30.712119 master-0 kubenswrapper[26474]: I0223 13:30:30.712080 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 23 13:30:30.712266 master-0 kubenswrapper[26474]: I0223 13:30:30.712093 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 23 13:30:30.712405 master-0 kubenswrapper[26474]: I0223 13:30:30.712118 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 23 13:30:30.712568 master-0 kubenswrapper[26474]: I0223 13:30:30.712499 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 23 13:30:30.712769 master-0 kubenswrapper[26474]: I0223 13:30:30.712754 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 23 13:30:30.715986 master-0 kubenswrapper[26474]: I0223 13:30:30.715904 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:30:30.845014 master-0 kubenswrapper[26474]: I0223 13:30:30.844906 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0894ec00-986c-4930-9ec0-6163e1e6f410-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " 
pod="openstack/ironic-inspector-0" Feb 23 13:30:30.845280 master-0 kubenswrapper[26474]: I0223 13:30:30.845059 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0894ec00-986c-4930-9ec0-6163e1e6f410-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.845280 master-0 kubenswrapper[26474]: I0223 13:30:30.845107 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.845280 master-0 kubenswrapper[26474]: I0223 13:30:30.845151 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-scripts\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.845280 master-0 kubenswrapper[26474]: I0223 13:30:30.845178 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.845280 master-0 kubenswrapper[26474]: I0223 13:30:30.845220 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-config\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " 
pod="openstack/ironic-inspector-0" Feb 23 13:30:30.845280 master-0 kubenswrapper[26474]: I0223 13:30:30.845249 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0894ec00-986c-4930-9ec0-6163e1e6f410-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.845506 master-0 kubenswrapper[26474]: I0223 13:30:30.845310 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvk2\" (UniqueName: \"kubernetes.io/projected/0894ec00-986c-4930-9ec0-6163e1e6f410-kube-api-access-8lvk2\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.845506 master-0 kubenswrapper[26474]: I0223 13:30:30.845364 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948211 master-0 kubenswrapper[26474]: I0223 13:30:30.948063 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0894ec00-986c-4930-9ec0-6163e1e6f410-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948447 master-0 kubenswrapper[26474]: I0223 13:30:30.948201 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0894ec00-986c-4930-9ec0-6163e1e6f410-etc-podinfo\") pod \"ironic-inspector-0\" (UID: 
\"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948447 master-0 kubenswrapper[26474]: I0223 13:30:30.948268 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948447 master-0 kubenswrapper[26474]: I0223 13:30:30.948310 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-scripts\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948447 master-0 kubenswrapper[26474]: I0223 13:30:30.948367 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948447 master-0 kubenswrapper[26474]: I0223 13:30:30.948423 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-config\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948616 master-0 kubenswrapper[26474]: I0223 13:30:30.948458 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0894ec00-986c-4930-9ec0-6163e1e6f410-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948616 master-0 
kubenswrapper[26474]: I0223 13:30:30.948535 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lvk2\" (UniqueName: \"kubernetes.io/projected/0894ec00-986c-4930-9ec0-6163e1e6f410-kube-api-access-8lvk2\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.948616 master-0 kubenswrapper[26474]: I0223 13:30:30.948590 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.950251 master-0 kubenswrapper[26474]: I0223 13:30:30.950125 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0894ec00-986c-4930-9ec0-6163e1e6f410-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.950317 master-0 kubenswrapper[26474]: I0223 13:30:30.950238 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0894ec00-986c-4930-9ec0-6163e1e6f410-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.952722 master-0 kubenswrapper[26474]: I0223 13:30:30.952312 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0894ec00-986c-4930-9ec0-6163e1e6f410-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.952722 master-0 kubenswrapper[26474]: I0223 13:30:30.952655 
26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.952997 master-0 kubenswrapper[26474]: I0223 13:30:30.952949 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.953899 master-0 kubenswrapper[26474]: I0223 13:30:30.953864 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-config\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.955384 master-0 kubenswrapper[26474]: I0223 13:30:30.955332 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-scripts\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.957753 master-0 kubenswrapper[26474]: I0223 13:30:30.957703 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0894ec00-986c-4930-9ec0-6163e1e6f410-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:30.981808 master-0 kubenswrapper[26474]: I0223 13:30:30.981646 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lvk2\" (UniqueName: 
\"kubernetes.io/projected/0894ec00-986c-4930-9ec0-6163e1e6f410-kube-api-access-8lvk2\") pod \"ironic-inspector-0\" (UID: \"0894ec00-986c-4930-9ec0-6163e1e6f410\") " pod="openstack/ironic-inspector-0" Feb 23 13:30:31.035152 master-0 kubenswrapper[26474]: I0223 13:30:31.035058 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 23 13:30:31.640501 master-0 kubenswrapper[26474]: I0223 13:30:31.640414 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 23 13:30:32.408625 master-0 kubenswrapper[26474]: I0223 13:30:32.408544 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc16798-1bca-47cb-9e87-3d1ba72e0355" path="/var/lib/kubelet/pods/adc16798-1bca-47cb-9e87-3d1ba72e0355/volumes" Feb 23 13:30:32.597870 master-0 kubenswrapper[26474]: I0223 13:30:32.597755 26474 generic.go:334] "Generic (PLEG): container finished" podID="0894ec00-986c-4930-9ec0-6163e1e6f410" containerID="51e4bbba86929c065ab67b30945d4251bc8fbd8594fe1b77f164ae7aedd3ee3f" exitCode=0 Feb 23 13:30:32.597870 master-0 kubenswrapper[26474]: I0223 13:30:32.597838 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerDied","Data":"51e4bbba86929c065ab67b30945d4251bc8fbd8594fe1b77f164ae7aedd3ee3f"} Feb 23 13:30:32.597870 master-0 kubenswrapper[26474]: I0223 13:30:32.597872 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerStarted","Data":"11287fdbf29fb5bebcb0f66d5c2c16598300e125ad1f20bfe2ea5f9aab377ee6"} Feb 23 13:30:34.015668 master-0 kubenswrapper[26474]: I0223 13:30:34.015546 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 23 13:30:34.628499 master-0 kubenswrapper[26474]: I0223 13:30:34.628404 26474 
generic.go:334] "Generic (PLEG): container finished" podID="808fa98d-dace-4799-9059-a26510355d62" containerID="7981b3b87eaec99bd9e9f06fb28beee544643bf31da41770e0cb6b08af41642b" exitCode=0 Feb 23 13:30:34.628499 master-0 kubenswrapper[26474]: I0223 13:30:34.628475 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerDied","Data":"7981b3b87eaec99bd9e9f06fb28beee544643bf31da41770e0cb6b08af41642b"} Feb 23 13:30:35.056384 master-0 kubenswrapper[26474]: I0223 13:30:35.055732 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ng4xw"] Feb 23 13:30:35.077472 master-0 kubenswrapper[26474]: I0223 13:30:35.073577 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ng4xw"] Feb 23 13:30:35.077472 master-0 kubenswrapper[26474]: I0223 13:30:35.073876 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.086198 master-0 kubenswrapper[26474]: I0223 13:30:35.086121 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gscd7\" (UniqueName: \"kubernetes.io/projected/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-kube-api-access-gscd7\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.086198 master-0 kubenswrapper[26474]: I0223 13:30:35.086226 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.086781 master-0 kubenswrapper[26474]: I0223 
13:30:35.086275 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-config-data\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.086781 master-0 kubenswrapper[26474]: I0223 13:30:35.086297 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-scripts\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.087302 master-0 kubenswrapper[26474]: I0223 13:30:35.087193 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 23 13:30:35.087472 master-0 kubenswrapper[26474]: I0223 13:30:35.087384 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 23 13:30:35.208372 master-0 kubenswrapper[26474]: I0223 13:30:35.205474 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gscd7\" (UniqueName: \"kubernetes.io/projected/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-kube-api-access-gscd7\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.208372 master-0 kubenswrapper[26474]: I0223 13:30:35.205690 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.208372 master-0 
kubenswrapper[26474]: I0223 13:30:35.205812 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-config-data\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.208372 master-0 kubenswrapper[26474]: I0223 13:30:35.205844 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-scripts\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.211940 master-0 kubenswrapper[26474]: I0223 13:30:35.211085 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.234054 master-0 kubenswrapper[26474]: I0223 13:30:35.233592 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-config-data\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.234514 master-0 kubenswrapper[26474]: I0223 13:30:35.233590 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-scripts\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.237170 master-0 kubenswrapper[26474]: I0223 13:30:35.237118 
26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 23 13:30:35.241958 master-0 kubenswrapper[26474]: I0223 13:30:35.239865 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.247152 master-0 kubenswrapper[26474]: I0223 13:30:35.246738 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gscd7\" (UniqueName: \"kubernetes.io/projected/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-kube-api-access-gscd7\") pod \"nova-cell0-cell-mapping-ng4xw\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.247525 master-0 kubenswrapper[26474]: I0223 13:30:35.247490 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Feb 23 13:30:35.273944 master-0 kubenswrapper[26474]: I0223 13:30:35.273394 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 23 13:30:35.324157 master-0 kubenswrapper[26474]: I0223 13:30:35.317671 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe6487b-2c95-4029-b4da-8970da075f5d-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.324157 master-0 kubenswrapper[26474]: I0223 13:30:35.317785 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p269k\" (UniqueName: \"kubernetes.io/projected/4fe6487b-2c95-4029-b4da-8970da075f5d-kube-api-access-p269k\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " 
pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.324157 master-0 kubenswrapper[26474]: I0223 13:30:35.318018 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe6487b-2c95-4029-b4da-8970da075f5d-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.344503 master-0 kubenswrapper[26474]: I0223 13:30:35.342611 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 13:30:35.347588 master-0 kubenswrapper[26474]: I0223 13:30:35.347527 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:30:35.355929 master-0 kubenswrapper[26474]: I0223 13:30:35.355863 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 13:30:35.362350 master-0 kubenswrapper[26474]: I0223 13:30:35.362240 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:30:35.423682 master-0 kubenswrapper[26474]: I0223 13:30:35.423573 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.424070 master-0 kubenswrapper[26474]: I0223 13:30:35.423735 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe6487b-2c95-4029-b4da-8970da075f5d-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.424070 master-0 
kubenswrapper[26474]: I0223 13:30:35.423819 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-config-data\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.424070 master-0 kubenswrapper[26474]: I0223 13:30:35.423904 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-logs\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.424070 master-0 kubenswrapper[26474]: I0223 13:30:35.423954 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqzn\" (UniqueName: \"kubernetes.io/projected/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-kube-api-access-sxqzn\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.424291 master-0 kubenswrapper[26474]: I0223 13:30:35.424157 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe6487b-2c95-4029-b4da-8970da075f5d-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.424291 master-0 kubenswrapper[26474]: I0223 13:30:35.424251 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p269k\" (UniqueName: \"kubernetes.io/projected/4fe6487b-2c95-4029-b4da-8970da075f5d-kube-api-access-p269k\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.428565 master-0 
kubenswrapper[26474]: I0223 13:30:35.428508 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe6487b-2c95-4029-b4da-8970da075f5d-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.432069 master-0 kubenswrapper[26474]: I0223 13:30:35.432032 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe6487b-2c95-4029-b4da-8970da075f5d-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.459817 master-0 kubenswrapper[26474]: I0223 13:30:35.459722 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:35.540054 master-0 kubenswrapper[26474]: I0223 13:30:35.539419 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.540054 master-0 kubenswrapper[26474]: I0223 13:30:35.539496 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-config-data\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.540054 master-0 kubenswrapper[26474]: I0223 13:30:35.539519 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-logs\") pod \"nova-api-0\" (UID: 
\"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.540054 master-0 kubenswrapper[26474]: I0223 13:30:35.539536 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxqzn\" (UniqueName: \"kubernetes.io/projected/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-kube-api-access-sxqzn\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.543531 master-0 kubenswrapper[26474]: I0223 13:30:35.542200 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p269k\" (UniqueName: \"kubernetes.io/projected/4fe6487b-2c95-4029-b4da-8970da075f5d-kube-api-access-p269k\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"4fe6487b-2c95-4029-b4da-8970da075f5d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 23 13:30:35.543531 master-0 kubenswrapper[26474]: I0223 13:30:35.542324 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-logs\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.543758 master-0 kubenswrapper[26474]: I0223 13:30:35.543640 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.546091 master-0 kubenswrapper[26474]: I0223 13:30:35.546037 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-config-data\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0" Feb 23 13:30:35.607112 master-0 kubenswrapper[26474]: I0223 
13:30:35.606916 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxqzn\" (UniqueName: \"kubernetes.io/projected/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-kube-api-access-sxqzn\") pod \"nova-api-0\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " pod="openstack/nova-api-0"
Feb 23 13:30:35.697373 master-0 kubenswrapper[26474]: I0223 13:30:35.683543 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 23 13:30:35.697373 master-0 kubenswrapper[26474]: I0223 13:30:35.690226 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:30:35.702416 master-0 kubenswrapper[26474]: I0223 13:30:35.701119 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 13:30:35.707873 master-0 kubenswrapper[26474]: I0223 13:30:35.703936 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 13:30:35.707873 master-0 kubenswrapper[26474]: I0223 13:30:35.704454 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 23 13:30:35.781473 master-0 kubenswrapper[26474]: I0223 13:30:35.780051 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 13:30:35.802397 master-0 kubenswrapper[26474]: I0223 13:30:35.801916 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:35.818956 master-0 kubenswrapper[26474]: I0223 13:30:35.818883 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:35.840005 master-0 kubenswrapper[26474]: I0223 13:30:35.838507 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccsdj\" (UniqueName: \"kubernetes.io/projected/f68966f3-dae8-4859-b7f2-254223a1506d-kube-api-access-ccsdj\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:35.925309 master-0 kubenswrapper[26474]: I0223 13:30:35.840739 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-config-data\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:35.925309 master-0 kubenswrapper[26474]: I0223 13:30:35.924477 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 23 13:30:35.937603 master-0 kubenswrapper[26474]: I0223 13:30:35.935121 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:30:35.974205 master-0 kubenswrapper[26474]: I0223 13:30:35.973413 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 13:30:35.991958 master-0 kubenswrapper[26474]: I0223 13:30:35.991216 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:35.994144 master-0 kubenswrapper[26474]: I0223 13:30:35.994060 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 13:30:35.997125 master-0 kubenswrapper[26474]: I0223 13:30:35.996858 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 23 13:30:36.019451 master-0 kubenswrapper[26474]: I0223 13:30:36.019210 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:36.026366 master-0 kubenswrapper[26474]: I0223 13:30:36.025428 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.026366 master-0 kubenswrapper[26474]: I0223 13:30:36.025545 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-config-data\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:36.026366 master-0 kubenswrapper[26474]: I0223 13:30:36.025993 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:36.026668 master-0 kubenswrapper[26474]: I0223 13:30:36.026420 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.036956 master-0 kubenswrapper[26474]: I0223 13:30:36.027096 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccsdj\" (UniqueName: \"kubernetes.io/projected/f68966f3-dae8-4859-b7f2-254223a1506d-kube-api-access-ccsdj\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:36.036956 master-0 kubenswrapper[26474]: I0223 13:30:36.027287 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzwps\" (UniqueName: \"kubernetes.io/projected/77f637a9-bbed-47c0-8a4f-13ebc89047f9-kube-api-access-pzwps\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.036956 master-0 kubenswrapper[26474]: I0223 13:30:36.031105 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-config-data\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:36.036956 master-0 kubenswrapper[26474]: I0223 13:30:36.036761 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:36.064705 master-0 kubenswrapper[26474]: I0223 13:30:36.063536 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccsdj\" (UniqueName: \"kubernetes.io/projected/f68966f3-dae8-4859-b7f2-254223a1506d-kube-api-access-ccsdj\") pod \"nova-scheduler-0\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") " pod="openstack/nova-scheduler-0"
Feb 23 13:30:36.078454 master-0 kubenswrapper[26474]: I0223 13:30:36.078058 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65f67f6fbf-px6zm"]
Feb 23 13:30:36.089547 master-0 kubenswrapper[26474]: I0223 13:30:36.089377 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.098142 master-0 kubenswrapper[26474]: I0223 13:30:36.098059 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f67f6fbf-px6zm"]
Feb 23 13:30:36.125535 master-0 kubenswrapper[26474]: I0223 13:30:36.121450 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 13:30:36.131480 master-0 kubenswrapper[26474]: I0223 13:30:36.130975 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-logs\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.131480 master-0 kubenswrapper[26474]: I0223 13:30:36.131078 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.131480 master-0 kubenswrapper[26474]: I0223 13:30:36.131241 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.131480 master-0 kubenswrapper[26474]: I0223 13:30:36.131379 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gkfg\" (UniqueName: \"kubernetes.io/projected/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-kube-api-access-7gkfg\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.131480 master-0 kubenswrapper[26474]: I0223 13:30:36.131405 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-config-data\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.131480 master-0 kubenswrapper[26474]: I0223 13:30:36.131432 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.131480 master-0 kubenswrapper[26474]: I0223 13:30:36.131458 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzwps\" (UniqueName: \"kubernetes.io/projected/77f637a9-bbed-47c0-8a4f-13ebc89047f9-kube-api-access-pzwps\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.160503 master-0 kubenswrapper[26474]: I0223 13:30:36.160443 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzwps\" (UniqueName: \"kubernetes.io/projected/77f637a9-bbed-47c0-8a4f-13ebc89047f9-kube-api-access-pzwps\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.160786 master-0 kubenswrapper[26474]: I0223 13:30:36.160754 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.175109 master-0 kubenswrapper[26474]: I0223 13:30:36.174963 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.196772 master-0 kubenswrapper[26474]: I0223 13:30:36.196726 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6mnk7"]
Feb 23 13:30:36.199482 master-0 kubenswrapper[26474]: I0223 13:30:36.199144 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.207041 master-0 kubenswrapper[26474]: I0223 13:30:36.207009 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 23 13:30:36.207678 master-0 kubenswrapper[26474]: I0223 13:30:36.207650 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 23 13:30:36.231751 master-0 kubenswrapper[26474]: I0223 13:30:36.231614 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6mnk7"]
Feb 23 13:30:36.234322 master-0 kubenswrapper[26474]: I0223 13:30:36.234261 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-svc\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.234322 master-0 kubenswrapper[26474]: I0223 13:30:36.234315 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-config\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.234459 master-0 kubenswrapper[26474]: I0223 13:30:36.234423 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gkfg\" (UniqueName: \"kubernetes.io/projected/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-kube-api-access-7gkfg\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.234719 master-0 kubenswrapper[26474]: I0223 13:30:36.234697 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-config-data\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.234947 master-0 kubenswrapper[26474]: I0223 13:30:36.234925 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-swift-storage-0\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.235011 master-0 kubenswrapper[26474]: I0223 13:30:36.234991 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.235153 master-0 kubenswrapper[26474]: I0223 13:30:36.235040 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-nb\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.235244 master-0 kubenswrapper[26474]: I0223 13:30:36.235223 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-logs\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.237197 master-0 kubenswrapper[26474]: I0223 13:30:36.237068 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77gpt\" (UniqueName: \"kubernetes.io/projected/9998b306-07a1-4bf0-a118-cff08aa88083-kube-api-access-77gpt\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.238469 master-0 kubenswrapper[26474]: I0223 13:30:36.237285 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-sb\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.244081 master-0 kubenswrapper[26474]: I0223 13:30:36.244046 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-logs\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.244296 master-0 kubenswrapper[26474]: I0223 13:30:36.244225 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.248352 master-0 kubenswrapper[26474]: I0223 13:30:36.245875 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-config-data\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.260759 master-0 kubenswrapper[26474]: I0223 13:30:36.260714 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gkfg\" (UniqueName: \"kubernetes.io/projected/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-kube-api-access-7gkfg\") pod \"nova-metadata-0\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:36.272011 master-0 kubenswrapper[26474]: I0223 13:30:36.271959 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:30:36.333323 master-0 kubenswrapper[26474]: I0223 13:30:36.331688 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ng4xw"]
Feb 23 13:30:36.343079 master-0 kubenswrapper[26474]: I0223 13:30:36.342177 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-nb\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.352995 master-0 kubenswrapper[26474]: I0223 13:30:36.351588 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-nb\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.369673 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-config-data\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.369754 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.370061 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77gpt\" (UniqueName: \"kubernetes.io/projected/9998b306-07a1-4bf0-a118-cff08aa88083-kube-api-access-77gpt\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.370208 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-scripts\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.370356 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj855\" (UniqueName: \"kubernetes.io/projected/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-kube-api-access-xj855\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.370419 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-sb\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.370558 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-svc\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.370590 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-config\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.371068 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.371395 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-swift-storage-0\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.371542 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-svc\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.372837 master-0 kubenswrapper[26474]: I0223 13:30:36.372410 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-swift-storage-0\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.373265 master-0 kubenswrapper[26474]: I0223 13:30:36.373103 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-config\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.378698 master-0 kubenswrapper[26474]: I0223 13:30:36.378205 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-sb\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.408292 master-0 kubenswrapper[26474]: I0223 13:30:36.408251 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77gpt\" (UniqueName: \"kubernetes.io/projected/9998b306-07a1-4bf0-a118-cff08aa88083-kube-api-access-77gpt\") pod \"dnsmasq-dns-65f67f6fbf-px6zm\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.481750 master-0 kubenswrapper[26474]: I0223 13:30:36.481107 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-config-data\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.493683 master-0 kubenswrapper[26474]: I0223 13:30:36.481162 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.493683 master-0 kubenswrapper[26474]: I0223 13:30:36.489325 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-config-data\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.493683 master-0 kubenswrapper[26474]: I0223 13:30:36.492896 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.493683 master-0 kubenswrapper[26474]: I0223 13:30:36.493568 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-scripts\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.493908 master-0 kubenswrapper[26474]: I0223 13:30:36.493748 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj855\" (UniqueName: \"kubernetes.io/projected/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-kube-api-access-xj855\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.499276 master-0 kubenswrapper[26474]: I0223 13:30:36.497952 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-scripts\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.512913 master-0 kubenswrapper[26474]: I0223 13:30:36.512714 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:36.523770 master-0 kubenswrapper[26474]: I0223 13:30:36.522793 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj855\" (UniqueName: \"kubernetes.io/projected/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-kube-api-access-xj855\") pod \"nova-cell1-conductor-db-sync-6mnk7\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") " pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.571079 master-0 kubenswrapper[26474]: I0223 13:30:36.570447 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:36.619070 master-0 kubenswrapper[26474]: I0223 13:30:36.619019 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 23 13:30:36.666660 master-0 kubenswrapper[26474]: W0223 13:30:36.665573 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fe6487b_2c95_4029_b4da_8970da075f5d.slice/crio-c0f8a0937b501e053115fc2199437f05ffc49500206d0853597b151a107cfbeb WatchSource:0}: Error finding container c0f8a0937b501e053115fc2199437f05ffc49500206d0853597b151a107cfbeb: Status 404 returned error can't find the container with id c0f8a0937b501e053115fc2199437f05ffc49500206d0853597b151a107cfbeb
Feb 23 13:30:36.760064 master-0 kubenswrapper[26474]: I0223 13:30:36.759879 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:30:36.828150 master-0 kubenswrapper[26474]: I0223 13:30:36.827797 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"4fe6487b-2c95-4029-b4da-8970da075f5d","Type":"ContainerStarted","Data":"c0f8a0937b501e053115fc2199437f05ffc49500206d0853597b151a107cfbeb"}
Feb 23 13:30:36.833270 master-0 kubenswrapper[26474]: I0223 13:30:36.832187 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5748884f-7269-4db7-9bfa-e80b7a3f1f1c","Type":"ContainerStarted","Data":"2bbd948cd01dfc5b292b599d13640cfae2f72e586c909ea2da9f8ddc36638ec4"}
Feb 23 13:30:36.848405 master-0 kubenswrapper[26474]: I0223 13:30:36.847743 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ng4xw" event={"ID":"044ee558-e330-4d4c-acbc-05bfdc4cb4e0","Type":"ContainerStarted","Data":"5f0f587dbd888f55f5f9817c8850f5b2164fd7f62f57efba5499c07b8c6bdb64"}
Feb 23 13:30:36.848405 master-0 kubenswrapper[26474]: I0223 13:30:36.847857 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ng4xw" event={"ID":"044ee558-e330-4d4c-acbc-05bfdc4cb4e0","Type":"ContainerStarted","Data":"626465b0f0a09e224d79e8d7da07f0b327699c70500b252fafc1d86af049ac12"}
Feb 23 13:30:36.921770 master-0 kubenswrapper[26474]: W0223 13:30:36.921655 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf68966f3_dae8_4859_b7f2_254223a1506d.slice/crio-56356c3c8f474820b9caee8f721090b3e8c21878b79c8aa53105f24171d2fb89 WatchSource:0}: Error finding container 56356c3c8f474820b9caee8f721090b3e8c21878b79c8aa53105f24171d2fb89: Status 404 returned error can't find the container with id 56356c3c8f474820b9caee8f721090b3e8c21878b79c8aa53105f24171d2fb89
Feb 23 13:30:36.942712 master-0 kubenswrapper[26474]: I0223 13:30:36.941819 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ng4xw" podStartSLOduration=2.94179593 podStartE2EDuration="2.94179593s" podCreationTimestamp="2026-02-23 13:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:36.872534921 +0000 UTC m=+958.719042618" watchObservedRunningTime="2026-02-23 13:30:36.94179593 +0000 UTC m=+958.788303607"
Feb 23 13:30:36.963374 master-0 kubenswrapper[26474]: I0223 13:30:36.962423 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:30:37.132197 master-0 kubenswrapper[26474]: I0223 13:30:37.131448 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 13:30:37.136662 master-0 kubenswrapper[26474]: W0223 13:30:37.135620 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77f637a9_bbed_47c0_8a4f_13ebc89047f9.slice/crio-d34e5c38bf0727a6e2c436118005aecdb8060c43dfe255a88b3f53063d31ae28 WatchSource:0}: Error finding container d34e5c38bf0727a6e2c436118005aecdb8060c43dfe255a88b3f53063d31ae28: Status 404 returned error can't find the container with id d34e5c38bf0727a6e2c436118005aecdb8060c43dfe255a88b3f53063d31ae28
Feb 23 13:30:37.196031 master-0 kubenswrapper[26474]: I0223 13:30:37.195896 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:37.409765 master-0 kubenswrapper[26474]: I0223 13:30:37.407616 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6mnk7"]
Feb 23 13:30:37.410908 master-0 kubenswrapper[26474]: W0223 13:30:37.410866 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9998b306_07a1_4bf0_a118_cff08aa88083.slice/crio-170ea072add31bac5f3de14668870c634b74ee26718fd798f34670a1b5d54992 WatchSource:0}: Error finding container 170ea072add31bac5f3de14668870c634b74ee26718fd798f34670a1b5d54992: Status 404 returned error can't find the container with id 170ea072add31bac5f3de14668870c634b74ee26718fd798f34670a1b5d54992
Feb 23 13:30:37.439099 master-0 kubenswrapper[26474]: I0223 13:30:37.439032 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f67f6fbf-px6zm"]
Feb 23 13:30:37.877483 master-0 kubenswrapper[26474]: I0223 13:30:37.877420 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6mnk7" event={"ID":"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf","Type":"ContainerStarted","Data":"31f786b44889212bdfb2bdf37d44f949b98d02721d7c27dbca8ece38c66a29bf"}
Feb 23 13:30:37.877483 master-0 kubenswrapper[26474]: I0223 13:30:37.877481 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6mnk7" event={"ID":"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf","Type":"ContainerStarted","Data":"0c79aa4802825cc541efba2baeaad282b4948d8a13f531098c6ca93e23a565c4"}
Feb 23 13:30:37.885447 master-0 kubenswrapper[26474]: I0223 13:30:37.885385 26474 generic.go:334] "Generic (PLEG): container finished" podID="9998b306-07a1-4bf0-a118-cff08aa88083" containerID="35146bd28b7f5beba352ded5d4c8e300690af88f6910b5cebdf8d091046d866f" exitCode=0
Feb 23 13:30:37.885604 master-0 kubenswrapper[26474]: I0223 13:30:37.885489 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" event={"ID":"9998b306-07a1-4bf0-a118-cff08aa88083","Type":"ContainerDied","Data":"35146bd28b7f5beba352ded5d4c8e300690af88f6910b5cebdf8d091046d866f"}
Feb 23 13:30:37.885604 master-0 kubenswrapper[26474]: I0223 13:30:37.885526 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" event={"ID":"9998b306-07a1-4bf0-a118-cff08aa88083","Type":"ContainerStarted","Data":"170ea072add31bac5f3de14668870c634b74ee26718fd798f34670a1b5d54992"}
Feb 23 13:30:37.889130 master-0 kubenswrapper[26474]: I0223 13:30:37.889090 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"77f637a9-bbed-47c0-8a4f-13ebc89047f9","Type":"ContainerStarted","Data":"d34e5c38bf0727a6e2c436118005aecdb8060c43dfe255a88b3f53063d31ae28"}
Feb 23 13:30:37.922495 master-0 kubenswrapper[26474]: I0223 13:30:37.908869 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-6mnk7" podStartSLOduration=1.908849487 podStartE2EDuration="1.908849487s" podCreationTimestamp="2026-02-23 13:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:37.894078728 +0000 UTC m=+959.740586405" watchObservedRunningTime="2026-02-23 13:30:37.908849487 +0000 UTC m=+959.755357164"
Feb 23 13:30:37.922495 master-0 kubenswrapper[26474]: I0223 13:30:37.912379 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f68966f3-dae8-4859-b7f2-254223a1506d","Type":"ContainerStarted","Data":"56356c3c8f474820b9caee8f721090b3e8c21878b79c8aa53105f24171d2fb89"}
Feb 23 13:30:37.980365 master-0 kubenswrapper[26474]: I0223 13:30:37.977130 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600","Type":"ContainerStarted","Data":"66f8b3ceacf9c4124301660e4e6170a5fc167df771701a67b0c6bd82c70261a6"}
Feb 23 13:30:38.992601 master-0 kubenswrapper[26474]: I0223 13:30:38.992192 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" event={"ID":"9998b306-07a1-4bf0-a118-cff08aa88083","Type":"ContainerStarted","Data":"8ffeeaff75e2e370fd0c57caed7e395c21bd3e7e22760a5e8dab8e083213b3f4"}
Feb 23 13:30:39.019667 master-0 kubenswrapper[26474]: I0223 13:30:39.019583 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" podStartSLOduration=4.019560178 podStartE2EDuration="4.019560178s" podCreationTimestamp="2026-02-23 13:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:39.013603552 +0000 UTC m=+960.860111239" watchObservedRunningTime="2026-02-23 13:30:39.019560178 +0000 UTC m=+960.866067855"
Feb 23 13:30:39.643198 master-0 kubenswrapper[26474]: I0223 13:30:39.643118 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 13:30:39.658959 master-0 kubenswrapper[26474]: I0223 13:30:39.658881 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:40.004729 master-0 kubenswrapper[26474]: I0223 13:30:40.004623 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm"
Feb 23 13:30:42.677790 master-0 kubenswrapper[26474]: I0223 13:30:42.677728 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Feb 23 13:30:44.075125 master-0 kubenswrapper[26474]: I0223 13:30:44.074959 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f68966f3-dae8-4859-b7f2-254223a1506d","Type":"ContainerStarted","Data":"0a766aa92a6433dc5132a3adabcabcb9d421f3b79c7e8063bce58a4b2a5dd758"}
Feb 23 13:30:44.079649 master-0 kubenswrapper[26474]: I0223 13:30:44.079262 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerStarted","Data":"51d5db4b94b2c7a423b364d2268763c5377e0279ac45b0486fa1a58996a9a279"}
Feb 23 13:30:44.081955 master-0 kubenswrapper[26474]: I0223 13:30:44.081883 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0"
event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerStarted","Data":"b8cf9441bc2ce72a94080373c119240c00b8e875c459abafcf2b0b07c78f0860"} Feb 23 13:30:44.085175 master-0 kubenswrapper[26474]: I0223 13:30:44.085103 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5748884f-7269-4db7-9bfa-e80b7a3f1f1c","Type":"ContainerStarted","Data":"ada2a696023ab410ac9497a964bbe967248e248160c7965c1e6e8f78ee80e8bb"} Feb 23 13:30:44.085175 master-0 kubenswrapper[26474]: I0223 13:30:44.085174 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5748884f-7269-4db7-9bfa-e80b7a3f1f1c","Type":"ContainerStarted","Data":"c165ab7fbd2af4af2be2b14e37412712c52d623b2a2be312ddb777b15eaf042f"} Feb 23 13:30:44.090366 master-0 kubenswrapper[26474]: I0223 13:30:44.090130 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600","Type":"ContainerStarted","Data":"42a9d2820134e9ed8475389d1474adb5fd65e3b68b4b9165741bd1c2dad925dc"} Feb 23 13:30:44.090366 master-0 kubenswrapper[26474]: I0223 13:30:44.090206 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600","Type":"ContainerStarted","Data":"4ea7d16cbc076f912a88373b7e88635e6072ea1c9fcebc2ed0c0504c339455c7"} Feb 23 13:30:44.090366 master-0 kubenswrapper[26474]: I0223 13:30:44.090153 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerName="nova-metadata-log" containerID="cri-o://4ea7d16cbc076f912a88373b7e88635e6072ea1c9fcebc2ed0c0504c339455c7" gracePeriod=30 Feb 23 13:30:44.090366 master-0 kubenswrapper[26474]: I0223 13:30:44.090207 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerName="nova-metadata-metadata" containerID="cri-o://42a9d2820134e9ed8475389d1474adb5fd65e3b68b4b9165741bd1c2dad925dc" gracePeriod=30 Feb 23 13:30:44.094382 master-0 kubenswrapper[26474]: I0223 13:30:44.094232 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"77f637a9-bbed-47c0-8a4f-13ebc89047f9","Type":"ContainerStarted","Data":"7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed"} Feb 23 13:30:44.094382 master-0 kubenswrapper[26474]: I0223 13:30:44.094318 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="77f637a9-bbed-47c0-8a4f-13ebc89047f9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed" gracePeriod=30 Feb 23 13:30:44.353439 master-0 kubenswrapper[26474]: I0223 13:30:44.353308 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.663087887 podStartE2EDuration="9.353255068s" podCreationTimestamp="2026-02-23 13:30:35 +0000 UTC" firstStartedPulling="2026-02-23 13:30:36.925046832 +0000 UTC m=+958.771554509" lastFinishedPulling="2026-02-23 13:30:42.615214013 +0000 UTC m=+964.461721690" observedRunningTime="2026-02-23 13:30:44.31397543 +0000 UTC m=+966.160483107" watchObservedRunningTime="2026-02-23 13:30:44.353255068 +0000 UTC m=+966.199762755" Feb 23 13:30:44.373676 master-0 kubenswrapper[26474]: I0223 13:30:44.373583 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.973251069 podStartE2EDuration="9.373559944s" podCreationTimestamp="2026-02-23 13:30:35 +0000 UTC" firstStartedPulling="2026-02-23 13:30:37.214454297 +0000 UTC m=+959.060961974" lastFinishedPulling="2026-02-23 13:30:42.614763172 +0000 UTC m=+964.461270849" 
observedRunningTime="2026-02-23 13:30:44.340165399 +0000 UTC m=+966.186673076" watchObservedRunningTime="2026-02-23 13:30:44.373559944 +0000 UTC m=+966.220067621" Feb 23 13:30:44.443701 master-0 kubenswrapper[26474]: I0223 13:30:44.443595 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.97160648 podStartE2EDuration="9.443573581s" podCreationTimestamp="2026-02-23 13:30:35 +0000 UTC" firstStartedPulling="2026-02-23 13:30:37.143711993 +0000 UTC m=+958.990219670" lastFinishedPulling="2026-02-23 13:30:42.615679094 +0000 UTC m=+964.462186771" observedRunningTime="2026-02-23 13:30:44.437646865 +0000 UTC m=+966.284154552" watchObservedRunningTime="2026-02-23 13:30:44.443573581 +0000 UTC m=+966.290081278" Feb 23 13:30:44.473267 master-0 kubenswrapper[26474]: I0223 13:30:44.473082 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.65296148 podStartE2EDuration="9.473053159s" podCreationTimestamp="2026-02-23 13:30:35 +0000 UTC" firstStartedPulling="2026-02-23 13:30:36.795505693 +0000 UTC m=+958.642013370" lastFinishedPulling="2026-02-23 13:30:42.615597382 +0000 UTC m=+964.462105049" observedRunningTime="2026-02-23 13:30:44.457684204 +0000 UTC m=+966.304191891" watchObservedRunningTime="2026-02-23 13:30:44.473053159 +0000 UTC m=+966.319560826" Feb 23 13:30:45.109785 master-0 kubenswrapper[26474]: I0223 13:30:45.109708 26474 generic.go:334] "Generic (PLEG): container finished" podID="044ee558-e330-4d4c-acbc-05bfdc4cb4e0" containerID="5f0f587dbd888f55f5f9817c8850f5b2164fd7f62f57efba5499c07b8c6bdb64" exitCode=0 Feb 23 13:30:45.110526 master-0 kubenswrapper[26474]: I0223 13:30:45.109812 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ng4xw" 
event={"ID":"044ee558-e330-4d4c-acbc-05bfdc4cb4e0","Type":"ContainerDied","Data":"5f0f587dbd888f55f5f9817c8850f5b2164fd7f62f57efba5499c07b8c6bdb64"} Feb 23 13:30:45.112533 master-0 kubenswrapper[26474]: I0223 13:30:45.112495 26474 generic.go:334] "Generic (PLEG): container finished" podID="0894ec00-986c-4930-9ec0-6163e1e6f410" containerID="b8cf9441bc2ce72a94080373c119240c00b8e875c459abafcf2b0b07c78f0860" exitCode=0 Feb 23 13:30:45.112654 master-0 kubenswrapper[26474]: I0223 13:30:45.112586 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerDied","Data":"b8cf9441bc2ce72a94080373c119240c00b8e875c459abafcf2b0b07c78f0860"} Feb 23 13:30:45.117015 master-0 kubenswrapper[26474]: I0223 13:30:45.116939 26474 generic.go:334] "Generic (PLEG): container finished" podID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerID="42a9d2820134e9ed8475389d1474adb5fd65e3b68b4b9165741bd1c2dad925dc" exitCode=0 Feb 23 13:30:45.117015 master-0 kubenswrapper[26474]: I0223 13:30:45.117008 26474 generic.go:334] "Generic (PLEG): container finished" podID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerID="4ea7d16cbc076f912a88373b7e88635e6072ea1c9fcebc2ed0c0504c339455c7" exitCode=143 Feb 23 13:30:45.117171 master-0 kubenswrapper[26474]: I0223 13:30:45.116981 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600","Type":"ContainerDied","Data":"42a9d2820134e9ed8475389d1474adb5fd65e3b68b4b9165741bd1c2dad925dc"} Feb 23 13:30:45.117171 master-0 kubenswrapper[26474]: I0223 13:30:45.117109 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600","Type":"ContainerDied","Data":"4ea7d16cbc076f912a88373b7e88635e6072ea1c9fcebc2ed0c0504c339455c7"} Feb 23 13:30:45.705717 master-0 kubenswrapper[26474]: I0223 13:30:45.705649 26474 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 13:30:45.705717 master-0 kubenswrapper[26474]: I0223 13:30:45.705715 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 13:30:46.122506 master-0 kubenswrapper[26474]: I0223 13:30:46.122435 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 13:30:46.122506 master-0 kubenswrapper[26474]: I0223 13:30:46.122507 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 13:30:46.162080 master-0 kubenswrapper[26474]: I0223 13:30:46.161995 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 13:30:46.274125 master-0 kubenswrapper[26474]: I0223 13:30:46.274026 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:30:46.372417 master-0 kubenswrapper[26474]: I0223 13:30:46.372249 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:30:46.372417 master-0 kubenswrapper[26474]: I0223 13:30:46.372395 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:30:46.516496 master-0 kubenswrapper[26474]: I0223 13:30:46.515623 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" Feb 23 13:30:46.788020 master-0 kubenswrapper[26474]: I0223 13:30:46.787554 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.254:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 13:30:46.788020 master-0 kubenswrapper[26474]: I0223 
13:30:46.787623 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.254:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 13:30:47.199219 master-0 kubenswrapper[26474]: I0223 13:30:47.198485 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 13:30:47.976466 master-0 kubenswrapper[26474]: I0223 13:30:47.976134 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b4f9f77-rzm8t"] Feb 23 13:30:47.976755 master-0 kubenswrapper[26474]: I0223 13:30:47.976496 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" podUID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerName="dnsmasq-dns" containerID="cri-o://648d18d293547c2fc5329ff800dfa0fa713ef1452251259a7246c8274b40e7cf" gracePeriod=10 Feb 23 13:30:48.185379 master-0 kubenswrapper[26474]: I0223 13:30:48.185294 26474 generic.go:334] "Generic (PLEG): container finished" podID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerID="648d18d293547c2fc5329ff800dfa0fa713ef1452251259a7246c8274b40e7cf" exitCode=0 Feb 23 13:30:48.185630 master-0 kubenswrapper[26474]: I0223 13:30:48.185383 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" event={"ID":"c6e84d30-9468-4cb0-b244-9cf568e9a485","Type":"ContainerDied","Data":"648d18d293547c2fc5329ff800dfa0fa713ef1452251259a7246c8274b40e7cf"} Feb 23 13:30:49.992109 master-0 kubenswrapper[26474]: I0223 13:30:49.992026 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" podUID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.247:5353: connect: connection refused" Feb 23 
13:30:50.226459 master-0 kubenswrapper[26474]: I0223 13:30:50.226358 26474 generic.go:334] "Generic (PLEG): container finished" podID="5f1f58b0-74b0-4553-9667-9fe3a18fc4bf" containerID="31f786b44889212bdfb2bdf37d44f949b98d02721d7c27dbca8ece38c66a29bf" exitCode=0 Feb 23 13:30:50.226459 master-0 kubenswrapper[26474]: I0223 13:30:50.226378 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6mnk7" event={"ID":"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf","Type":"ContainerDied","Data":"31f786b44889212bdfb2bdf37d44f949b98d02721d7c27dbca8ece38c66a29bf"} Feb 23 13:30:51.265509 master-0 kubenswrapper[26474]: I0223 13:30:51.265435 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ng4xw" event={"ID":"044ee558-e330-4d4c-acbc-05bfdc4cb4e0","Type":"ContainerDied","Data":"626465b0f0a09e224d79e8d7da07f0b327699c70500b252fafc1d86af049ac12"} Feb 23 13:30:51.266147 master-0 kubenswrapper[26474]: I0223 13:30:51.265512 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="626465b0f0a09e224d79e8d7da07f0b327699c70500b252fafc1d86af049ac12" Feb 23 13:30:51.270307 master-0 kubenswrapper[26474]: I0223 13:30:51.270239 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600","Type":"ContainerDied","Data":"66f8b3ceacf9c4124301660e4e6170a5fc167df771701a67b0c6bd82c70261a6"} Feb 23 13:30:51.270446 master-0 kubenswrapper[26474]: I0223 13:30:51.270315 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f8b3ceacf9c4124301660e4e6170a5fc167df771701a67b0c6bd82c70261a6" Feb 23 13:30:51.343901 master-0 kubenswrapper[26474]: I0223 13:30:51.343839 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ng4xw" Feb 23 13:30:51.359459 master-0 kubenswrapper[26474]: I0223 13:30:51.359281 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:30:51.515476 master-0 kubenswrapper[26474]: I0223 13:30:51.515360 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-combined-ca-bundle\") pod \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " Feb 23 13:30:51.515476 master-0 kubenswrapper[26474]: I0223 13:30:51.515483 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-config-data\") pod \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " Feb 23 13:30:51.515897 master-0 kubenswrapper[26474]: I0223 13:30:51.515567 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-scripts\") pod \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " Feb 23 13:30:51.515897 master-0 kubenswrapper[26474]: I0223 13:30:51.515588 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gkfg\" (UniqueName: \"kubernetes.io/projected/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-kube-api-access-7gkfg\") pod \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " Feb 23 13:30:51.515897 master-0 kubenswrapper[26474]: I0223 13:30:51.515628 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-config-data\") pod 
\"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " Feb 23 13:30:51.515897 master-0 kubenswrapper[26474]: I0223 13:30:51.515660 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gscd7\" (UniqueName: \"kubernetes.io/projected/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-kube-api-access-gscd7\") pod \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " Feb 23 13:30:51.515897 master-0 kubenswrapper[26474]: I0223 13:30:51.515677 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-logs\") pod \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\" (UID: \"ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600\") " Feb 23 13:30:51.515897 master-0 kubenswrapper[26474]: I0223 13:30:51.515823 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-combined-ca-bundle\") pod \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\" (UID: \"044ee558-e330-4d4c-acbc-05bfdc4cb4e0\") " Feb 23 13:30:51.523442 master-0 kubenswrapper[26474]: I0223 13:30:51.519995 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-logs" (OuterVolumeSpecName: "logs") pod "ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" (UID: "ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:51.523442 master-0 kubenswrapper[26474]: I0223 13:30:51.522526 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-kube-api-access-7gkfg" (OuterVolumeSpecName: "kube-api-access-7gkfg") pod "ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" (UID: "ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600"). 
InnerVolumeSpecName "kube-api-access-7gkfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:51.528053 master-0 kubenswrapper[26474]: I0223 13:30:51.527654 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-scripts" (OuterVolumeSpecName: "scripts") pod "044ee558-e330-4d4c-acbc-05bfdc4cb4e0" (UID: "044ee558-e330-4d4c-acbc-05bfdc4cb4e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:51.528053 master-0 kubenswrapper[26474]: I0223 13:30:51.527874 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-kube-api-access-gscd7" (OuterVolumeSpecName: "kube-api-access-gscd7") pod "044ee558-e330-4d4c-acbc-05bfdc4cb4e0" (UID: "044ee558-e330-4d4c-acbc-05bfdc4cb4e0"). InnerVolumeSpecName "kube-api-access-gscd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:51.576167 master-0 kubenswrapper[26474]: I0223 13:30:51.575770 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" (UID: "ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:51.604012 master-0 kubenswrapper[26474]: I0223 13:30:51.603856 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-config-data" (OuterVolumeSpecName: "config-data") pod "044ee558-e330-4d4c-acbc-05bfdc4cb4e0" (UID: "044ee558-e330-4d4c-acbc-05bfdc4cb4e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:51.604012 master-0 kubenswrapper[26474]: I0223 13:30:51.603882 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-config-data" (OuterVolumeSpecName: "config-data") pod "ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" (UID: "ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:51.617625 master-0 kubenswrapper[26474]: I0223 13:30:51.617563 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "044ee558-e330-4d4c-acbc-05bfdc4cb4e0" (UID: "044ee558-e330-4d4c-acbc-05bfdc4cb4e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:51.619147 master-0 kubenswrapper[26474]: I0223 13:30:51.619074 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gscd7\" (UniqueName: \"kubernetes.io/projected/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-kube-api-access-gscd7\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:51.619147 master-0 kubenswrapper[26474]: I0223 13:30:51.619130 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:51.619147 master-0 kubenswrapper[26474]: I0223 13:30:51.619143 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:51.619147 master-0 kubenswrapper[26474]: I0223 13:30:51.619153 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:51.619376 master-0 kubenswrapper[26474]: I0223 13:30:51.619164 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:51.619376 master-0 kubenswrapper[26474]: I0223 13:30:51.619175 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044ee558-e330-4d4c-acbc-05bfdc4cb4e0-scripts\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:51.619376 master-0 kubenswrapper[26474]: I0223 13:30:51.619183 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gkfg\" (UniqueName: \"kubernetes.io/projected/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-kube-api-access-7gkfg\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:51.619376 master-0 kubenswrapper[26474]: I0223 13:30:51.619192 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:51.729437 master-0 kubenswrapper[26474]: I0223 13:30:51.728858 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" Feb 23 13:30:51.758353 master-0 kubenswrapper[26474]: I0223 13:30:51.756931 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6mnk7" Feb 23 13:30:51.824160 master-0 kubenswrapper[26474]: I0223 13:30:51.824013 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-config\") pod \"c6e84d30-9468-4cb0-b244-9cf568e9a485\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " Feb 23 13:30:51.824410 master-0 kubenswrapper[26474]: I0223 13:30:51.824208 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-sb\") pod \"c6e84d30-9468-4cb0-b244-9cf568e9a485\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " Feb 23 13:30:51.824861 master-0 kubenswrapper[26474]: I0223 13:30:51.824810 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-svc\") pod \"c6e84d30-9468-4cb0-b244-9cf568e9a485\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " Feb 23 13:30:51.824950 master-0 kubenswrapper[26474]: I0223 13:30:51.824908 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-swift-storage-0\") pod \"c6e84d30-9468-4cb0-b244-9cf568e9a485\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " Feb 23 13:30:51.825001 master-0 kubenswrapper[26474]: I0223 13:30:51.824977 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkbpb\" (UniqueName: \"kubernetes.io/projected/c6e84d30-9468-4cb0-b244-9cf568e9a485-kube-api-access-vkbpb\") pod \"c6e84d30-9468-4cb0-b244-9cf568e9a485\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " Feb 23 13:30:51.825239 master-0 kubenswrapper[26474]: I0223 13:30:51.825179 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-nb\") pod \"c6e84d30-9468-4cb0-b244-9cf568e9a485\" (UID: \"c6e84d30-9468-4cb0-b244-9cf568e9a485\") " Feb 23 13:30:51.830990 master-0 kubenswrapper[26474]: I0223 13:30:51.830877 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e84d30-9468-4cb0-b244-9cf568e9a485-kube-api-access-vkbpb" (OuterVolumeSpecName: "kube-api-access-vkbpb") pod "c6e84d30-9468-4cb0-b244-9cf568e9a485" (UID: "c6e84d30-9468-4cb0-b244-9cf568e9a485"). InnerVolumeSpecName "kube-api-access-vkbpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:51.889560 master-0 kubenswrapper[26474]: I0223 13:30:51.889406 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6e84d30-9468-4cb0-b244-9cf568e9a485" (UID: "c6e84d30-9468-4cb0-b244-9cf568e9a485"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:51.891370 master-0 kubenswrapper[26474]: I0223 13:30:51.891295 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6e84d30-9468-4cb0-b244-9cf568e9a485" (UID: "c6e84d30-9468-4cb0-b244-9cf568e9a485"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:30:51.897126 master-0 kubenswrapper[26474]: I0223 13:30:51.897016 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6e84d30-9468-4cb0-b244-9cf568e9a485" (UID: "c6e84d30-9468-4cb0-b244-9cf568e9a485"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:30:51.901801 master-0 kubenswrapper[26474]: I0223 13:30:51.901741 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c6e84d30-9468-4cb0-b244-9cf568e9a485" (UID: "c6e84d30-9468-4cb0-b244-9cf568e9a485"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:30:51.902085 master-0 kubenswrapper[26474]: I0223 13:30:51.902015 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-config" (OuterVolumeSpecName: "config") pod "c6e84d30-9468-4cb0-b244-9cf568e9a485" (UID: "c6e84d30-9468-4cb0-b244-9cf568e9a485"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:30:51.928885 master-0 kubenswrapper[26474]: I0223 13:30:51.928801 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-combined-ca-bundle\") pod \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") "
Feb 23 13:30:51.929212 master-0 kubenswrapper[26474]: I0223 13:30:51.929172 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-config-data\") pod \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") "
Feb 23 13:30:51.929435 master-0 kubenswrapper[26474]: I0223 13:30:51.929295 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj855\" (UniqueName: \"kubernetes.io/projected/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-kube-api-access-xj855\") pod \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") "
Feb 23 13:30:51.929435 master-0 kubenswrapper[26474]: I0223 13:30:51.929417 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-scripts\") pod \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\" (UID: \"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf\") "
Feb 23 13:30:51.930401 master-0 kubenswrapper[26474]: I0223 13:30:51.930279 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:51.930401 master-0 kubenswrapper[26474]: I0223 13:30:51.930316 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:51.930401 master-0 kubenswrapper[26474]: I0223 13:30:51.930332 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:51.930401 master-0 kubenswrapper[26474]: I0223 13:30:51.930365 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:51.930401 master-0 kubenswrapper[26474]: I0223 13:30:51.930377 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkbpb\" (UniqueName: \"kubernetes.io/projected/c6e84d30-9468-4cb0-b244-9cf568e9a485-kube-api-access-vkbpb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:51.930401 master-0 kubenswrapper[26474]: I0223 13:30:51.930388 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6e84d30-9468-4cb0-b244-9cf568e9a485-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:51.932129 master-0 kubenswrapper[26474]: I0223 13:30:51.932079 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-kube-api-access-xj855" (OuterVolumeSpecName: "kube-api-access-xj855") pod "5f1f58b0-74b0-4553-9667-9fe3a18fc4bf" (UID: "5f1f58b0-74b0-4553-9667-9fe3a18fc4bf"). InnerVolumeSpecName "kube-api-access-xj855". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:30:51.932912 master-0 kubenswrapper[26474]: I0223 13:30:51.932870 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-scripts" (OuterVolumeSpecName: "scripts") pod "5f1f58b0-74b0-4553-9667-9fe3a18fc4bf" (UID: "5f1f58b0-74b0-4553-9667-9fe3a18fc4bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:30:51.958095 master-0 kubenswrapper[26474]: I0223 13:30:51.958007 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f1f58b0-74b0-4553-9667-9fe3a18fc4bf" (UID: "5f1f58b0-74b0-4553-9667-9fe3a18fc4bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:30:51.962506 master-0 kubenswrapper[26474]: I0223 13:30:51.962442 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-config-data" (OuterVolumeSpecName: "config-data") pod "5f1f58b0-74b0-4553-9667-9fe3a18fc4bf" (UID: "5f1f58b0-74b0-4553-9667-9fe3a18fc4bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:30:52.033221 master-0 kubenswrapper[26474]: I0223 13:30:52.033156 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:52.033221 master-0 kubenswrapper[26474]: I0223 13:30:52.033211 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj855\" (UniqueName: \"kubernetes.io/projected/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-kube-api-access-xj855\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:52.033221 master-0 kubenswrapper[26474]: I0223 13:30:52.033224 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:52.033221 master-0 kubenswrapper[26474]: I0223 13:30:52.033233 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1f58b0-74b0-4553-9667-9fe3a18fc4bf-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:52.289279 master-0 kubenswrapper[26474]: I0223 13:30:52.289212 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6mnk7" event={"ID":"5f1f58b0-74b0-4553-9667-9fe3a18fc4bf","Type":"ContainerDied","Data":"0c79aa4802825cc541efba2baeaad282b4948d8a13f531098c6ca93e23a565c4"}
Feb 23 13:30:52.289279 master-0 kubenswrapper[26474]: I0223 13:30:52.289282 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c79aa4802825cc541efba2baeaad282b4948d8a13f531098c6ca93e23a565c4"
Feb 23 13:30:52.289857 master-0 kubenswrapper[26474]: I0223 13:30:52.289372 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6mnk7"
Feb 23 13:30:52.301923 master-0 kubenswrapper[26474]: I0223 13:30:52.301584 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t" event={"ID":"c6e84d30-9468-4cb0-b244-9cf568e9a485","Type":"ContainerDied","Data":"47e7ebe5b99d7106d065f3fd48bfccf30d635b30aa3dae2394bede3dc1fcc37d"}
Feb 23 13:30:52.301923 master-0 kubenswrapper[26474]: I0223 13:30:52.301682 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b4f9f77-rzm8t"
Feb 23 13:30:52.301923 master-0 kubenswrapper[26474]: I0223 13:30:52.301740 26474 scope.go:117] "RemoveContainer" containerID="648d18d293547c2fc5329ff800dfa0fa713ef1452251259a7246c8274b40e7cf"
Feb 23 13:30:52.301923 master-0 kubenswrapper[26474]: I0223 13:30:52.301757 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 13:30:52.303952 master-0 kubenswrapper[26474]: I0223 13:30:52.302142 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ng4xw"
Feb 23 13:30:52.343836 master-0 kubenswrapper[26474]: I0223 13:30:52.340153 26474 scope.go:117] "RemoveContainer" containerID="ea3e571ff51f1964d8189478fe406f3c12e2fc58076fa151ea1acc4712e637a8"
Feb 23 13:30:53.330743 master-0 kubenswrapper[26474]: I0223 13:30:53.330656 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerStarted","Data":"0c2b75b018967f36e462de21bea78ccc3357db0526428b6d1d2643ddb126f101"}
Feb 23 13:30:53.982373 master-0 kubenswrapper[26474]: I0223 13:30:53.973952 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:54.005360 master-0 kubenswrapper[26474]: I0223 13:30:54.004291 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:54.035455 master-0 kubenswrapper[26474]: I0223 13:30:54.028110 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b4f9f77-rzm8t"]
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.049564 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: E0223 13:30:54.050083 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerName="dnsmasq-dns"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050118 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerName="dnsmasq-dns"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: E0223 13:30:54.050132 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerName="init"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050138 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerName="init"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: E0223 13:30:54.050185 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerName="nova-metadata-metadata"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050191 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerName="nova-metadata-metadata"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: E0223 13:30:54.050217 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044ee558-e330-4d4c-acbc-05bfdc4cb4e0" containerName="nova-manage"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050223 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="044ee558-e330-4d4c-acbc-05bfdc4cb4e0" containerName="nova-manage"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: E0223 13:30:54.050237 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerName="nova-metadata-log"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050243 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerName="nova-metadata-log"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: E0223 13:30:54.050256 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1f58b0-74b0-4553-9667-9fe3a18fc4bf" containerName="nova-cell1-conductor-db-sync"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050262 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1f58b0-74b0-4553-9667-9fe3a18fc4bf" containerName="nova-cell1-conductor-db-sync"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050555 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1f58b0-74b0-4553-9667-9fe3a18fc4bf" containerName="nova-cell1-conductor-db-sync"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050605 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerName="nova-metadata-log"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050620 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" containerName="nova-metadata-metadata"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050640 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e84d30-9468-4cb0-b244-9cf568e9a485" containerName="dnsmasq-dns"
Feb 23 13:30:54.054371 master-0 kubenswrapper[26474]: I0223 13:30:54.050657 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="044ee558-e330-4d4c-acbc-05bfdc4cb4e0" containerName="nova-manage"
Feb 23 13:30:54.058361 master-0 kubenswrapper[26474]: I0223 13:30:54.056854 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 13:30:54.065373 master-0 kubenswrapper[26474]: I0223 13:30:54.062735 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 23 13:30:54.065373 master-0 kubenswrapper[26474]: I0223 13:30:54.062978 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 23 13:30:54.072372 master-0 kubenswrapper[26474]: I0223 13:30:54.069285 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66b4f9f77-rzm8t"]
Feb 23 13:30:54.115394 master-0 kubenswrapper[26474]: I0223 13:30:54.114867 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.115394 master-0 kubenswrapper[26474]: I0223 13:30:54.114983 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6gj\" (UniqueName: \"kubernetes.io/projected/eac6de05-2789-453f-9fad-c33ca4b8d2bd-kube-api-access-zm6gj\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.115394 master-0 kubenswrapper[26474]: I0223 13:30:54.115329 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-config-data\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.115696 master-0 kubenswrapper[26474]: I0223 13:30:54.115473 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eac6de05-2789-453f-9fad-c33ca4b8d2bd-logs\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.115696 master-0 kubenswrapper[26474]: I0223 13:30:54.115570 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.120470 master-0 kubenswrapper[26474]: I0223 13:30:54.118932 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:54.177364 master-0 kubenswrapper[26474]: I0223 13:30:54.176554 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:30:54.177364 master-0 kubenswrapper[26474]: I0223 13:30:54.176899 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f68966f3-dae8-4859-b7f2-254223a1506d" containerName="nova-scheduler-scheduler" containerID="cri-o://0a766aa92a6433dc5132a3adabcabcb9d421f3b79c7e8063bce58a4b2a5dd758" gracePeriod=30
Feb 23 13:30:54.221178 master-0 kubenswrapper[26474]: I0223 13:30:54.217350 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.221178 master-0 kubenswrapper[26474]: I0223 13:30:54.217456 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.221178 master-0 kubenswrapper[26474]: I0223 13:30:54.217474 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6gj\" (UniqueName: \"kubernetes.io/projected/eac6de05-2789-453f-9fad-c33ca4b8d2bd-kube-api-access-zm6gj\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.221178 master-0 kubenswrapper[26474]: I0223 13:30:54.217590 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-config-data\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.221178 master-0 kubenswrapper[26474]: I0223 13:30:54.217701 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eac6de05-2789-453f-9fad-c33ca4b8d2bd-logs\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.221178 master-0 kubenswrapper[26474]: I0223 13:30:54.218100 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eac6de05-2789-453f-9fad-c33ca4b8d2bd-logs\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.221700 master-0 kubenswrapper[26474]: I0223 13:30:54.221379 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.221740 master-0 kubenswrapper[26474]: I0223 13:30:54.221718 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.226821 master-0 kubenswrapper[26474]: I0223 13:30:54.221778 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 23 13:30:54.226821 master-0 kubenswrapper[26474]: I0223 13:30:54.222860 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-config-data\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.226821 master-0 kubenswrapper[26474]: I0223 13:30:54.223454 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.226821 master-0 kubenswrapper[26474]: I0223 13:30:54.226198 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 23 13:30:54.246639 master-0 kubenswrapper[26474]: I0223 13:30:54.245353 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 23 13:30:54.249127 master-0 kubenswrapper[26474]: I0223 13:30:54.249093 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6gj\" (UniqueName: \"kubernetes.io/projected/eac6de05-2789-453f-9fad-c33ca4b8d2bd-kube-api-access-zm6gj\") pod \"nova-metadata-0\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " pod="openstack/nova-metadata-0"
Feb 23 13:30:54.280257 master-0 kubenswrapper[26474]: I0223 13:30:54.279826 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:30:54.281100 master-0 kubenswrapper[26474]: I0223 13:30:54.280998 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-log" containerID="cri-o://c165ab7fbd2af4af2be2b14e37412712c52d623b2a2be312ddb777b15eaf042f" gracePeriod=30
Feb 23 13:30:54.281385 master-0 kubenswrapper[26474]: I0223 13:30:54.281255 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-api" containerID="cri-o://ada2a696023ab410ac9497a964bbe967248e248160c7965c1e6e8f78ee80e8bb" gracePeriod=30
Feb 23 13:30:54.317926 master-0 kubenswrapper[26474]: I0223 13:30:54.317355 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:54.319246 master-0 kubenswrapper[26474]: I0223 13:30:54.318708 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 13:30:54.322284 master-0 kubenswrapper[26474]: I0223 13:30:54.320714 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkvll\" (UniqueName: \"kubernetes.io/projected/461d5a34-8400-467c-acc6-91ce6bad84ee-kube-api-access-kkvll\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.322946 master-0 kubenswrapper[26474]: I0223 13:30:54.322891 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461d5a34-8400-467c-acc6-91ce6bad84ee-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.323365 master-0 kubenswrapper[26474]: I0223 13:30:54.323318 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461d5a34-8400-467c-acc6-91ce6bad84ee-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.416301 master-0 kubenswrapper[26474]: I0223 13:30:54.416220 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600" path="/var/lib/kubelet/pods/ab3ea0ec-e66a-4dd0-b4ad-f3dbd3fad600/volumes"
Feb 23 13:30:54.417501 master-0 kubenswrapper[26474]: I0223 13:30:54.417224 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e84d30-9468-4cb0-b244-9cf568e9a485" path="/var/lib/kubelet/pods/c6e84d30-9468-4cb0-b244-9cf568e9a485/volumes"
Feb 23 13:30:54.418121 master-0 kubenswrapper[26474]: I0223 13:30:54.418015 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 23 13:30:54.418121 master-0 kubenswrapper[26474]: I0223 13:30:54.418052 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"4fe6487b-2c95-4029-b4da-8970da075f5d","Type":"ContainerStarted","Data":"6ff2a9d3e149601403461d713cd2d0174bf53c004fff6ea52089df8d79c7e210"}
Feb 23 13:30:54.420258 master-0 kubenswrapper[26474]: I0223 13:30:54.420231 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerStarted","Data":"99525adb6b039120eb8d3e74da65436a1b5bc6d06e4a7ae0cb7dfe1bae149e4c"}
Feb 23 13:30:54.426150 master-0 kubenswrapper[26474]: I0223 13:30:54.425365 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461d5a34-8400-467c-acc6-91ce6bad84ee-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.426150 master-0 kubenswrapper[26474]: I0223 13:30:54.426068 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461d5a34-8400-467c-acc6-91ce6bad84ee-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.426616 master-0 kubenswrapper[26474]: I0223 13:30:54.426277 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkvll\" (UniqueName: \"kubernetes.io/projected/461d5a34-8400-467c-acc6-91ce6bad84ee-kube-api-access-kkvll\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.431307 master-0 kubenswrapper[26474]: I0223 13:30:54.431267 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461d5a34-8400-467c-acc6-91ce6bad84ee-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.433037 master-0 kubenswrapper[26474]: I0223 13:30:54.432673 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461d5a34-8400-467c-acc6-91ce6bad84ee-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.447896 master-0 kubenswrapper[26474]: I0223 13:30:54.447013 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.232939717 podStartE2EDuration="19.446989752s" podCreationTimestamp="2026-02-23 13:30:35 +0000 UTC" firstStartedPulling="2026-02-23 13:30:36.68793746 +0000 UTC m=+958.534445137" lastFinishedPulling="2026-02-23 13:30:53.901987495 +0000 UTC m=+975.748495172" observedRunningTime="2026-02-23 13:30:54.440395531 +0000 UTC m=+976.286903208" watchObservedRunningTime="2026-02-23 13:30:54.446989752 +0000 UTC m=+976.293497429"
Feb 23 13:30:54.483637 master-0 kubenswrapper[26474]: I0223 13:30:54.483522 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkvll\" (UniqueName: \"kubernetes.io/projected/461d5a34-8400-467c-acc6-91ce6bad84ee-kube-api-access-kkvll\") pod \"nova-cell1-conductor-0\" (UID: \"461d5a34-8400-467c-acc6-91ce6bad84ee\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:54.486075 master-0 kubenswrapper[26474]: I0223 13:30:54.486008 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 23 13:30:54.562091 master-0 kubenswrapper[26474]: I0223 13:30:54.548871 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:55.094666 master-0 kubenswrapper[26474]: I0223 13:30:55.094558 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:30:55.121323 master-0 kubenswrapper[26474]: W0223 13:30:55.120978 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeac6de05_2789_453f_9fad_c33ca4b8d2bd.slice/crio-4e7ed5fae951b9d1df80b43a36da080493e4dce7a760918923e52b6b72a0dfef WatchSource:0}: Error finding container 4e7ed5fae951b9d1df80b43a36da080493e4dce7a760918923e52b6b72a0dfef: Status 404 returned error can't find the container with id 4e7ed5fae951b9d1df80b43a36da080493e4dce7a760918923e52b6b72a0dfef
Feb 23 13:30:55.238362 master-0 kubenswrapper[26474]: I0223 13:30:55.232025 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 23 13:30:55.454798 master-0 kubenswrapper[26474]: I0223 13:30:55.454682 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"461d5a34-8400-467c-acc6-91ce6bad84ee","Type":"ContainerStarted","Data":"fe7efb5d0466925fef74e3217a5017e28ef2534d16062bca46d83bcc26b49a72"}
Feb 23 13:30:55.460075 master-0 kubenswrapper[26474]: I0223 13:30:55.459978 26474 generic.go:334] "Generic (PLEG): container finished" podID="f68966f3-dae8-4859-b7f2-254223a1506d" containerID="0a766aa92a6433dc5132a3adabcabcb9d421f3b79c7e8063bce58a4b2a5dd758" exitCode=0
Feb 23 13:30:55.460205 master-0 kubenswrapper[26474]: I0223 13:30:55.460119 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f68966f3-dae8-4859-b7f2-254223a1506d","Type":"ContainerDied","Data":"0a766aa92a6433dc5132a3adabcabcb9d421f3b79c7e8063bce58a4b2a5dd758"}
Feb 23 13:30:55.467414 master-0 kubenswrapper[26474]: I0223 13:30:55.467323 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eac6de05-2789-453f-9fad-c33ca4b8d2bd","Type":"ContainerStarted","Data":"4e7ed5fae951b9d1df80b43a36da080493e4dce7a760918923e52b6b72a0dfef"}
Feb 23 13:30:55.467918 master-0 kubenswrapper[26474]: I0223 13:30:55.467890 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 13:30:55.484204 master-0 kubenswrapper[26474]: I0223 13:30:55.484117 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerStarted","Data":"29bb464770c3ef68b8945148249f42967ff726cf0cf9034a33bc40ddf1f4ace1"}
Feb 23 13:30:55.484204 master-0 kubenswrapper[26474]: I0223 13:30:55.484205 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerStarted","Data":"4ddf946dd9d4404137db8c4db32f2cf252e69b26b995e59a5869a44c77108bdb"}
Feb 23 13:30:55.488466 master-0 kubenswrapper[26474]: I0223 13:30:55.488416 26474 generic.go:334] "Generic (PLEG): container finished" podID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerID="c165ab7fbd2af4af2be2b14e37412712c52d623b2a2be312ddb777b15eaf042f" exitCode=143
Feb 23 13:30:55.489397 master-0 kubenswrapper[26474]: I0223 13:30:55.488523 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5748884f-7269-4db7-9bfa-e80b7a3f1f1c","Type":"ContainerDied","Data":"c165ab7fbd2af4af2be2b14e37412712c52d623b2a2be312ddb777b15eaf042f"}
Feb 23 13:30:55.504737 master-0 kubenswrapper[26474]: I0223 13:30:55.503057 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccsdj\" (UniqueName: \"kubernetes.io/projected/f68966f3-dae8-4859-b7f2-254223a1506d-kube-api-access-ccsdj\") pod \"f68966f3-dae8-4859-b7f2-254223a1506d\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") "
Feb 23 13:30:55.514282 master-0 kubenswrapper[26474]: I0223 13:30:55.514219 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-combined-ca-bundle\") pod \"f68966f3-dae8-4859-b7f2-254223a1506d\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") "
Feb 23 13:30:55.514682 master-0 kubenswrapper[26474]: I0223 13:30:55.514655 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-config-data\") pod \"f68966f3-dae8-4859-b7f2-254223a1506d\" (UID: \"f68966f3-dae8-4859-b7f2-254223a1506d\") "
Feb 23 13:30:55.559810 master-0 kubenswrapper[26474]: I0223 13:30:55.559753 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f68966f3-dae8-4859-b7f2-254223a1506d-kube-api-access-ccsdj" (OuterVolumeSpecName: "kube-api-access-ccsdj") pod "f68966f3-dae8-4859-b7f2-254223a1506d" (UID: "f68966f3-dae8-4859-b7f2-254223a1506d"). InnerVolumeSpecName "kube-api-access-ccsdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:30:55.621822 master-0 kubenswrapper[26474]: I0223 13:30:55.616937 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f68966f3-dae8-4859-b7f2-254223a1506d" (UID: "f68966f3-dae8-4859-b7f2-254223a1506d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:30:55.621822 master-0 kubenswrapper[26474]: I0223 13:30:55.621094 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-config-data" (OuterVolumeSpecName: "config-data") pod "f68966f3-dae8-4859-b7f2-254223a1506d" (UID: "f68966f3-dae8-4859-b7f2-254223a1506d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:30:55.639513 master-0 kubenswrapper[26474]: I0223 13:30:55.639439 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccsdj\" (UniqueName: \"kubernetes.io/projected/f68966f3-dae8-4859-b7f2-254223a1506d-kube-api-access-ccsdj\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:55.639513 master-0 kubenswrapper[26474]: I0223 13:30:55.639498 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:55.639513 master-0 kubenswrapper[26474]: I0223 13:30:55.639510 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f68966f3-dae8-4859-b7f2-254223a1506d-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:56.505569 master-0 kubenswrapper[26474]: I0223 13:30:56.505496 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0894ec00-986c-4930-9ec0-6163e1e6f410","Type":"ContainerStarted","Data":"07bb135d9cc3fd7f64c6f3ae0f78fd3fc1fd5c82ec603bfcf1a6acd82c0af1a9"}
Feb 23 13:30:56.507459 master-0 kubenswrapper[26474]: I0223 13:30:56.507423 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 23 13:30:56.507569 master-0 kubenswrapper[26474]: I0223 13:30:56.507469 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 23 13:30:56.510584 master-0 kubenswrapper[26474]: I0223 13:30:56.510526 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"461d5a34-8400-467c-acc6-91ce6bad84ee","Type":"ContainerStarted","Data":"8b7eaab606a2b8bd92c249516b990ba48dda43dfc189e1b89ac22fede92119a5"}
Feb 23 13:30:56.511827 master-0 kubenswrapper[26474]: I0223 13:30:56.511781 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 23 13:30:56.516108 master-0 kubenswrapper[26474]: I0223 13:30:56.516009 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f68966f3-dae8-4859-b7f2-254223a1506d","Type":"ContainerDied","Data":"56356c3c8f474820b9caee8f721090b3e8c21878b79c8aa53105f24171d2fb89"}
Feb 23 13:30:56.516332 master-0 kubenswrapper[26474]: I0223 13:30:56.516148 26474 scope.go:117] "RemoveContainer" containerID="0a766aa92a6433dc5132a3adabcabcb9d421f3b79c7e8063bce58a4b2a5dd758"
Feb 23 13:30:56.516332 master-0 kubenswrapper[26474]: I0223 13:30:56.516034 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 13:30:56.519926 master-0 kubenswrapper[26474]: I0223 13:30:56.519876 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eac6de05-2789-453f-9fad-c33ca4b8d2bd","Type":"ContainerStarted","Data":"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a"}
Feb 23 13:30:56.519926 master-0 kubenswrapper[26474]: I0223 13:30:56.519930 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eac6de05-2789-453f-9fad-c33ca4b8d2bd","Type":"ContainerStarted","Data":"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068"}
Feb 23 13:30:56.520335 master-0 kubenswrapper[26474]: I0223 13:30:56.520267 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerName="nova-metadata-log" containerID="cri-o://179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068" gracePeriod=30
Feb 23 13:30:56.520891 master-0 kubenswrapper[26474]: I0223 13:30:56.520456 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerName="nova-metadata-metadata" containerID="cri-o://84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a" gracePeriod=30
Feb 23 13:30:56.603861 master-0 kubenswrapper[26474]: I0223 13:30:56.603565 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=16.532995124 podStartE2EDuration="26.603543452s" podCreationTimestamp="2026-02-23 13:30:30 +0000 UTC" firstStartedPulling="2026-02-23 13:30:32.599882501 +0000 UTC m=+954.446390178" lastFinishedPulling="2026-02-23 13:30:42.670430829 +0000 UTC m=+964.516938506" observedRunningTime="2026-02-23 13:30:56.561881747 +0000 UTC m=+978.408389454"
watchObservedRunningTime="2026-02-23 13:30:56.603543452 +0000 UTC m=+978.450051129" Feb 23 13:30:56.620849 master-0 kubenswrapper[26474]: I0223 13:30:56.620767 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.620743783 podStartE2EDuration="3.620743783s" podCreationTimestamp="2026-02-23 13:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:56.590222108 +0000 UTC m=+978.436729785" watchObservedRunningTime="2026-02-23 13:30:56.620743783 +0000 UTC m=+978.467251470" Feb 23 13:30:56.684628 master-0 kubenswrapper[26474]: I0223 13:30:56.684558 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:30:56.696708 master-0 kubenswrapper[26474]: I0223 13:30:56.696628 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:30:56.702687 master-0 kubenswrapper[26474]: I0223 13:30:56.702600 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.702581047 podStartE2EDuration="2.702581047s" podCreationTimestamp="2026-02-23 13:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:56.634682842 +0000 UTC m=+978.481190519" watchObservedRunningTime="2026-02-23 13:30:56.702581047 +0000 UTC m=+978.549088724" Feb 23 13:30:56.724395 master-0 kubenswrapper[26474]: I0223 13:30:56.722483 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:30:56.724395 master-0 kubenswrapper[26474]: E0223 13:30:56.723042 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f68966f3-dae8-4859-b7f2-254223a1506d" containerName="nova-scheduler-scheduler" Feb 23 13:30:56.724395 master-0 
kubenswrapper[26474]: I0223 13:30:56.723058 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="f68966f3-dae8-4859-b7f2-254223a1506d" containerName="nova-scheduler-scheduler" Feb 23 13:30:56.724395 master-0 kubenswrapper[26474]: I0223 13:30:56.723462 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="f68966f3-dae8-4859-b7f2-254223a1506d" containerName="nova-scheduler-scheduler" Feb 23 13:30:56.724395 master-0 kubenswrapper[26474]: I0223 13:30:56.724186 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:30:56.729742 master-0 kubenswrapper[26474]: I0223 13:30:56.726673 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 13:30:56.753379 master-0 kubenswrapper[26474]: I0223 13:30:56.749817 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:30:56.785228 master-0 kubenswrapper[26474]: I0223 13:30:56.785159 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-config-data\") pod \"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:56.785482 master-0 kubenswrapper[26474]: I0223 13:30:56.785436 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:56.785636 master-0 kubenswrapper[26474]: I0223 13:30:56.785598 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mb5\" (UniqueName: 
\"kubernetes.io/projected/37c3e157-78c6-4d54-9bf5-9ed12e893297-kube-api-access-n7mb5\") pod \"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:56.897404 master-0 kubenswrapper[26474]: I0223 13:30:56.893173 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-config-data\") pod \"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:56.897404 master-0 kubenswrapper[26474]: I0223 13:30:56.893495 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:56.897404 master-0 kubenswrapper[26474]: I0223 13:30:56.893566 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7mb5\" (UniqueName: \"kubernetes.io/projected/37c3e157-78c6-4d54-9bf5-9ed12e893297-kube-api-access-n7mb5\") pod \"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:56.897404 master-0 kubenswrapper[26474]: I0223 13:30:56.897307 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:56.916007 master-0 kubenswrapper[26474]: I0223 13:30:56.915794 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7mb5\" (UniqueName: \"kubernetes.io/projected/37c3e157-78c6-4d54-9bf5-9ed12e893297-kube-api-access-n7mb5\") pod 
\"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:56.921364 master-0 kubenswrapper[26474]: I0223 13:30:56.920979 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-config-data\") pod \"nova-scheduler-0\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " pod="openstack/nova-scheduler-0" Feb 23 13:30:57.118282 master-0 kubenswrapper[26474]: I0223 13:30:57.116818 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:30:57.278457 master-0 kubenswrapper[26474]: I0223 13:30:57.278062 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:30:57.420433 master-0 kubenswrapper[26474]: I0223 13:30:57.418206 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-nova-metadata-tls-certs\") pod \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " Feb 23 13:30:57.420433 master-0 kubenswrapper[26474]: I0223 13:30:57.418421 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-config-data\") pod \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " Feb 23 13:30:57.420433 master-0 kubenswrapper[26474]: I0223 13:30:57.418614 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eac6de05-2789-453f-9fad-c33ca4b8d2bd-logs\") pod \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " Feb 23 13:30:57.420433 master-0 kubenswrapper[26474]: I0223 
13:30:57.418754 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-combined-ca-bundle\") pod \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " Feb 23 13:30:57.420433 master-0 kubenswrapper[26474]: I0223 13:30:57.418859 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm6gj\" (UniqueName: \"kubernetes.io/projected/eac6de05-2789-453f-9fad-c33ca4b8d2bd-kube-api-access-zm6gj\") pod \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\" (UID: \"eac6de05-2789-453f-9fad-c33ca4b8d2bd\") " Feb 23 13:30:57.421145 master-0 kubenswrapper[26474]: I0223 13:30:57.421104 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac6de05-2789-453f-9fad-c33ca4b8d2bd-logs" (OuterVolumeSpecName: "logs") pod "eac6de05-2789-453f-9fad-c33ca4b8d2bd" (UID: "eac6de05-2789-453f-9fad-c33ca4b8d2bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:57.448404 master-0 kubenswrapper[26474]: I0223 13:30:57.448331 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac6de05-2789-453f-9fad-c33ca4b8d2bd-kube-api-access-zm6gj" (OuterVolumeSpecName: "kube-api-access-zm6gj") pod "eac6de05-2789-453f-9fad-c33ca4b8d2bd" (UID: "eac6de05-2789-453f-9fad-c33ca4b8d2bd"). InnerVolumeSpecName "kube-api-access-zm6gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:57.452886 master-0 kubenswrapper[26474]: I0223 13:30:57.452790 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eac6de05-2789-453f-9fad-c33ca4b8d2bd" (UID: "eac6de05-2789-453f-9fad-c33ca4b8d2bd"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:57.475255 master-0 kubenswrapper[26474]: I0223 13:30:57.475151 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-config-data" (OuterVolumeSpecName: "config-data") pod "eac6de05-2789-453f-9fad-c33ca4b8d2bd" (UID: "eac6de05-2789-453f-9fad-c33ca4b8d2bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:57.512879 master-0 kubenswrapper[26474]: I0223 13:30:57.512806 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "eac6de05-2789-453f-9fad-c33ca4b8d2bd" (UID: "eac6de05-2789-453f-9fad-c33ca4b8d2bd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:57.522432 master-0 kubenswrapper[26474]: I0223 13:30:57.522347 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eac6de05-2789-453f-9fad-c33ca4b8d2bd-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:57.522812 master-0 kubenswrapper[26474]: I0223 13:30:57.522554 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:57.522812 master-0 kubenswrapper[26474]: I0223 13:30:57.522575 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm6gj\" (UniqueName: \"kubernetes.io/projected/eac6de05-2789-453f-9fad-c33ca4b8d2bd-kube-api-access-zm6gj\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:57.522812 master-0 kubenswrapper[26474]: I0223 13:30:57.522587 26474 reconciler_common.go:293] "Volume detached 
for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:57.522812 master-0 kubenswrapper[26474]: I0223 13:30:57.522597 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac6de05-2789-453f-9fad-c33ca4b8d2bd-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:57.557685 master-0 kubenswrapper[26474]: I0223 13:30:57.556955 26474 generic.go:334] "Generic (PLEG): container finished" podID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerID="84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a" exitCode=0 Feb 23 13:30:57.557685 master-0 kubenswrapper[26474]: I0223 13:30:57.556999 26474 generic.go:334] "Generic (PLEG): container finished" podID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerID="179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068" exitCode=143 Feb 23 13:30:57.557685 master-0 kubenswrapper[26474]: I0223 13:30:57.557096 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:30:57.558026 master-0 kubenswrapper[26474]: I0223 13:30:57.557791 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eac6de05-2789-453f-9fad-c33ca4b8d2bd","Type":"ContainerDied","Data":"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a"} Feb 23 13:30:57.558026 master-0 kubenswrapper[26474]: I0223 13:30:57.557847 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eac6de05-2789-453f-9fad-c33ca4b8d2bd","Type":"ContainerDied","Data":"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068"} Feb 23 13:30:57.558026 master-0 kubenswrapper[26474]: I0223 13:30:57.557857 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eac6de05-2789-453f-9fad-c33ca4b8d2bd","Type":"ContainerDied","Data":"4e7ed5fae951b9d1df80b43a36da080493e4dce7a760918923e52b6b72a0dfef"} Feb 23 13:30:57.558026 master-0 kubenswrapper[26474]: I0223 13:30:57.557876 26474 scope.go:117] "RemoveContainer" containerID="84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a" Feb 23 13:30:57.566535 master-0 kubenswrapper[26474]: I0223 13:30:57.561547 26474 generic.go:334] "Generic (PLEG): container finished" podID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerID="ada2a696023ab410ac9497a964bbe967248e248160c7965c1e6e8f78ee80e8bb" exitCode=0 Feb 23 13:30:57.566535 master-0 kubenswrapper[26474]: I0223 13:30:57.561611 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5748884f-7269-4db7-9bfa-e80b7a3f1f1c","Type":"ContainerDied","Data":"ada2a696023ab410ac9497a964bbe967248e248160c7965c1e6e8f78ee80e8bb"} Feb 23 13:30:57.612115 master-0 kubenswrapper[26474]: I0223 13:30:57.610308 26474 scope.go:117] "RemoveContainer" containerID="179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068" Feb 23 13:30:57.614862 master-0 
kubenswrapper[26474]: I0223 13:30:57.614815 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:30:57.660584 master-0 kubenswrapper[26474]: I0223 13:30:57.660524 26474 scope.go:117] "RemoveContainer" containerID="84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a" Feb 23 13:30:57.661075 master-0 kubenswrapper[26474]: E0223 13:30:57.661021 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a\": container with ID starting with 84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a not found: ID does not exist" containerID="84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a" Feb 23 13:30:57.661123 master-0 kubenswrapper[26474]: I0223 13:30:57.661082 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a"} err="failed to get container status \"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a\": rpc error: code = NotFound desc = could not find container \"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a\": container with ID starting with 84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a not found: ID does not exist" Feb 23 13:30:57.661123 master-0 kubenswrapper[26474]: I0223 13:30:57.661115 26474 scope.go:117] "RemoveContainer" containerID="179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068" Feb 23 13:30:57.661776 master-0 kubenswrapper[26474]: E0223 13:30:57.661736 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068\": container with ID starting with 179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068 not found: ID does not 
exist" containerID="179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068" Feb 23 13:30:57.661838 master-0 kubenswrapper[26474]: I0223 13:30:57.661784 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068"} err="failed to get container status \"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068\": rpc error: code = NotFound desc = could not find container \"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068\": container with ID starting with 179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068 not found: ID does not exist" Feb 23 13:30:57.661838 master-0 kubenswrapper[26474]: I0223 13:30:57.661814 26474 scope.go:117] "RemoveContainer" containerID="84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a" Feb 23 13:30:57.664386 master-0 kubenswrapper[26474]: I0223 13:30:57.664286 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a"} err="failed to get container status \"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a\": rpc error: code = NotFound desc = could not find container \"84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a\": container with ID starting with 84b639631dfba070608fc941ae4911da772d6aa412484a6177455d307c35617a not found: ID does not exist" Feb 23 13:30:57.664386 master-0 kubenswrapper[26474]: I0223 13:30:57.664330 26474 scope.go:117] "RemoveContainer" containerID="179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068" Feb 23 13:30:57.664774 master-0 kubenswrapper[26474]: I0223 13:30:57.664735 26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068"} err="failed to get container status 
\"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068\": rpc error: code = NotFound desc = could not find container \"179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068\": container with ID starting with 179cf8ff2a72bc857764885d82f9d052471a33977f47344fe5f6399123e0f068 not found: ID does not exist" Feb 23 13:30:57.673436 master-0 kubenswrapper[26474]: I0223 13:30:57.672091 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:30:57.716882 master-0 kubenswrapper[26474]: I0223 13:30:57.715329 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:30:57.736355 master-0 kubenswrapper[26474]: I0223 13:30:57.736140 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:30:57.737022 master-0 kubenswrapper[26474]: E0223 13:30:57.736995 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerName="nova-metadata-metadata" Feb 23 13:30:57.737022 master-0 kubenswrapper[26474]: I0223 13:30:57.737021 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerName="nova-metadata-metadata" Feb 23 13:30:57.737208 master-0 kubenswrapper[26474]: E0223 13:30:57.737127 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerName="nova-metadata-log" Feb 23 13:30:57.737208 master-0 kubenswrapper[26474]: I0223 13:30:57.737136 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerName="nova-metadata-log" Feb 23 13:30:57.737545 master-0 kubenswrapper[26474]: I0223 13:30:57.737506 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerName="nova-metadata-log" Feb 23 13:30:57.737545 master-0 kubenswrapper[26474]: I0223 13:30:57.737536 26474 
memory_manager.go:354] "RemoveStaleState removing state" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" containerName="nova-metadata-metadata" Feb 23 13:30:57.740168 master-0 kubenswrapper[26474]: I0223 13:30:57.740131 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:30:57.743433 master-0 kubenswrapper[26474]: I0223 13:30:57.743395 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 13:30:57.743694 master-0 kubenswrapper[26474]: I0223 13:30:57.743663 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 13:30:57.749251 master-0 kubenswrapper[26474]: I0223 13:30:57.749194 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:30:57.833947 master-0 kubenswrapper[26474]: I0223 13:30:57.833889 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84999a2-9c53-4695-9e13-fa5d1f135b9f-logs\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.834051 master-0 kubenswrapper[26474]: I0223 13:30:57.834005 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.834117 master-0 kubenswrapper[26474]: I0223 13:30:57.834052 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.834158 master-0 kubenswrapper[26474]: I0223 13:30:57.834126 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-config-data\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.834497 master-0 kubenswrapper[26474]: I0223 13:30:57.834468 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjbp\" (UniqueName: \"kubernetes.io/projected/d84999a2-9c53-4695-9e13-fa5d1f135b9f-kube-api-access-gsjbp\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.942388 master-0 kubenswrapper[26474]: I0223 13:30:57.941241 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-config-data\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.942976 master-0 kubenswrapper[26474]: I0223 13:30:57.942916 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjbp\" (UniqueName: \"kubernetes.io/projected/d84999a2-9c53-4695-9e13-fa5d1f135b9f-kube-api-access-gsjbp\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.943664 master-0 kubenswrapper[26474]: I0223 13:30:57.943306 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84999a2-9c53-4695-9e13-fa5d1f135b9f-logs\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 
13:30:57.944437 master-0 kubenswrapper[26474]: I0223 13:30:57.944279 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84999a2-9c53-4695-9e13-fa5d1f135b9f-logs\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.958452 master-0 kubenswrapper[26474]: I0223 13:30:57.956128 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-config-data\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.971409 master-0 kubenswrapper[26474]: I0223 13:30:57.970940 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.971409 master-0 kubenswrapper[26474]: I0223 13:30:57.971056 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.975769 master-0 kubenswrapper[26474]: I0223 13:30:57.975638 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjbp\" (UniqueName: \"kubernetes.io/projected/d84999a2-9c53-4695-9e13-fa5d1f135b9f-kube-api-access-gsjbp\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:57.981580 master-0 kubenswrapper[26474]: I0223 13:30:57.976331 26474 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:58.004897 master-0 kubenswrapper[26474]: I0223 13:30:58.004852 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " pod="openstack/nova-metadata-0" Feb 23 13:30:58.106285 master-0 kubenswrapper[26474]: I0223 13:30:58.106214 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:30:58.145002 master-0 kubenswrapper[26474]: I0223 13:30:58.144954 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:30:58.282364 master-0 kubenswrapper[26474]: I0223 13:30:58.282017 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-config-data\") pod \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " Feb 23 13:30:58.282364 master-0 kubenswrapper[26474]: I0223 13:30:58.282159 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-logs\") pod \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " Feb 23 13:30:58.282364 master-0 kubenswrapper[26474]: I0223 13:30:58.282216 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-combined-ca-bundle\") pod 
\"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " Feb 23 13:30:58.282673 master-0 kubenswrapper[26474]: I0223 13:30:58.282533 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxqzn\" (UniqueName: \"kubernetes.io/projected/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-kube-api-access-sxqzn\") pod \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\" (UID: \"5748884f-7269-4db7-9bfa-e80b7a3f1f1c\") " Feb 23 13:30:58.286207 master-0 kubenswrapper[26474]: I0223 13:30:58.283242 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-logs" (OuterVolumeSpecName: "logs") pod "5748884f-7269-4db7-9bfa-e80b7a3f1f1c" (UID: "5748884f-7269-4db7-9bfa-e80b7a3f1f1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:58.289360 master-0 kubenswrapper[26474]: I0223 13:30:58.286695 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-kube-api-access-sxqzn" (OuterVolumeSpecName: "kube-api-access-sxqzn") pod "5748884f-7269-4db7-9bfa-e80b7a3f1f1c" (UID: "5748884f-7269-4db7-9bfa-e80b7a3f1f1c"). InnerVolumeSpecName "kube-api-access-sxqzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:58.328363 master-0 kubenswrapper[26474]: I0223 13:30:58.327608 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5748884f-7269-4db7-9bfa-e80b7a3f1f1c" (UID: "5748884f-7269-4db7-9bfa-e80b7a3f1f1c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:58.333357 master-0 kubenswrapper[26474]: I0223 13:30:58.332481 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-config-data" (OuterVolumeSpecName: "config-data") pod "5748884f-7269-4db7-9bfa-e80b7a3f1f1c" (UID: "5748884f-7269-4db7-9bfa-e80b7a3f1f1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:58.388572 master-0 kubenswrapper[26474]: I0223 13:30:58.385979 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxqzn\" (UniqueName: \"kubernetes.io/projected/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-kube-api-access-sxqzn\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:58.388572 master-0 kubenswrapper[26474]: I0223 13:30:58.386040 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:58.388572 master-0 kubenswrapper[26474]: I0223 13:30:58.386058 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:58.388572 master-0 kubenswrapper[26474]: I0223 13:30:58.386072 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5748884f-7269-4db7-9bfa-e80b7a3f1f1c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:30:58.423369 master-0 kubenswrapper[26474]: I0223 13:30:58.423234 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac6de05-2789-453f-9fad-c33ca4b8d2bd" path="/var/lib/kubelet/pods/eac6de05-2789-453f-9fad-c33ca4b8d2bd/volumes" Feb 23 13:30:58.429361 master-0 kubenswrapper[26474]: I0223 13:30:58.424298 26474 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f68966f3-dae8-4859-b7f2-254223a1506d" path="/var/lib/kubelet/pods/f68966f3-dae8-4859-b7f2-254223a1506d/volumes" Feb 23 13:30:58.581017 master-0 kubenswrapper[26474]: I0223 13:30:58.580940 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5748884f-7269-4db7-9bfa-e80b7a3f1f1c","Type":"ContainerDied","Data":"2bbd948cd01dfc5b292b599d13640cfae2f72e586c909ea2da9f8ddc36638ec4"} Feb 23 13:30:58.581525 master-0 kubenswrapper[26474]: I0223 13:30:58.580976 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:30:58.581525 master-0 kubenswrapper[26474]: I0223 13:30:58.581027 26474 scope.go:117] "RemoveContainer" containerID="ada2a696023ab410ac9497a964bbe967248e248160c7965c1e6e8f78ee80e8bb" Feb 23 13:30:58.587878 master-0 kubenswrapper[26474]: I0223 13:30:58.587824 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"37c3e157-78c6-4d54-9bf5-9ed12e893297","Type":"ContainerStarted","Data":"a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9"} Feb 23 13:30:58.587952 master-0 kubenswrapper[26474]: I0223 13:30:58.587879 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"37c3e157-78c6-4d54-9bf5-9ed12e893297","Type":"ContainerStarted","Data":"688c5a2145cf75bb4ff92eaeec2bd6cf5e21b03bfea0c45bf3569b2ad5c416e7"} Feb 23 13:30:58.615525 master-0 kubenswrapper[26474]: I0223 13:30:58.615463 26474 scope.go:117] "RemoveContainer" containerID="c165ab7fbd2af4af2be2b14e37412712c52d623b2a2be312ddb777b15eaf042f" Feb 23 13:30:58.622553 master-0 kubenswrapper[26474]: I0223 13:30:58.622388 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.622363723 podStartE2EDuration="2.622363723s" podCreationTimestamp="2026-02-23 13:30:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:58.605888792 +0000 UTC m=+980.452396489" watchObservedRunningTime="2026-02-23 13:30:58.622363723 +0000 UTC m=+980.468871400" Feb 23 13:30:58.647726 master-0 kubenswrapper[26474]: I0223 13:30:58.647646 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 23 13:30:58.661565 master-0 kubenswrapper[26474]: I0223 13:30:58.661453 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:30:58.696374 master-0 kubenswrapper[26474]: I0223 13:30:58.694465 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:30:58.742708 master-0 kubenswrapper[26474]: I0223 13:30:58.741410 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 13:30:58.742708 master-0 kubenswrapper[26474]: E0223 13:30:58.742039 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-api" Feb 23 13:30:58.742708 master-0 kubenswrapper[26474]: I0223 13:30:58.742055 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-api" Feb 23 13:30:58.742708 master-0 kubenswrapper[26474]: E0223 13:30:58.742110 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-log" Feb 23 13:30:58.742708 master-0 kubenswrapper[26474]: I0223 13:30:58.742117 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-log" Feb 23 13:30:58.742708 master-0 kubenswrapper[26474]: I0223 13:30:58.742666 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-api" Feb 23 
13:30:58.742708 master-0 kubenswrapper[26474]: I0223 13:30:58.742693 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" containerName="nova-api-log" Feb 23 13:30:58.753215 master-0 kubenswrapper[26474]: I0223 13:30:58.753168 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:30:58.756805 master-0 kubenswrapper[26474]: I0223 13:30:58.756620 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 13:30:58.776369 master-0 kubenswrapper[26474]: I0223 13:30:58.775796 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:30:58.799513 master-0 kubenswrapper[26474]: I0223 13:30:58.799453 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-logs\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.799797 master-0 kubenswrapper[26474]: I0223 13:30:58.799780 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.799890 master-0 kubenswrapper[26474]: I0223 13:30:58.799875 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-config-data\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.800088 master-0 kubenswrapper[26474]: I0223 13:30:58.800073 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dr4l\" (UniqueName: \"kubernetes.io/projected/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-kube-api-access-9dr4l\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.801440 master-0 kubenswrapper[26474]: I0223 13:30:58.801409 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:30:58.901240 master-0 kubenswrapper[26474]: I0223 13:30:58.901170 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dr4l\" (UniqueName: \"kubernetes.io/projected/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-kube-api-access-9dr4l\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.905039 master-0 kubenswrapper[26474]: I0223 13:30:58.904999 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-logs\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.905210 master-0 kubenswrapper[26474]: I0223 13:30:58.905195 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.905315 master-0 kubenswrapper[26474]: I0223 13:30:58.905302 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-config-data\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.908007 master-0 kubenswrapper[26474]: I0223 13:30:58.907288 26474 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-logs\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.909388 master-0 kubenswrapper[26474]: I0223 13:30:58.909370 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-config-data\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.909791 master-0 kubenswrapper[26474]: I0223 13:30:58.909755 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:58.936174 master-0 kubenswrapper[26474]: I0223 13:30:58.936131 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dr4l\" (UniqueName: \"kubernetes.io/projected/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-kube-api-access-9dr4l\") pod \"nova-api-0\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " pod="openstack/nova-api-0" Feb 23 13:30:59.075751 master-0 kubenswrapper[26474]: I0223 13:30:59.075585 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:30:59.559152 master-0 kubenswrapper[26474]: W0223 13:30:59.559078 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb56eecb_ab6f_48ef_bb82_ba3a605b72d1.slice/crio-e761ac9e7a12d0629b4e079d3060b60d663f8ff8bac3499f370884abea4e1c56 WatchSource:0}: Error finding container e761ac9e7a12d0629b4e079d3060b60d663f8ff8bac3499f370884abea4e1c56: Status 404 returned error can't find the container with id e761ac9e7a12d0629b4e079d3060b60d663f8ff8bac3499f370884abea4e1c56 Feb 23 13:30:59.568578 master-0 kubenswrapper[26474]: I0223 13:30:59.565392 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:30:59.598521 master-0 kubenswrapper[26474]: I0223 13:30:59.598381 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1","Type":"ContainerStarted","Data":"e761ac9e7a12d0629b4e079d3060b60d663f8ff8bac3499f370884abea4e1c56"} Feb 23 13:30:59.602262 master-0 kubenswrapper[26474]: I0223 13:30:59.602129 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d84999a2-9c53-4695-9e13-fa5d1f135b9f","Type":"ContainerStarted","Data":"cf5631018557db759c7b8824aa3f2c163dd11007f294dce6a6e0c75cc64e9391"} Feb 23 13:30:59.602262 master-0 kubenswrapper[26474]: I0223 13:30:59.602203 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d84999a2-9c53-4695-9e13-fa5d1f135b9f","Type":"ContainerStarted","Data":"3a16901f9d43b9ca07c800b6b2c486dcca7c3ee7411f065ebdb99dc0965d96e7"} Feb 23 13:30:59.602262 master-0 kubenswrapper[26474]: I0223 13:30:59.602217 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d84999a2-9c53-4695-9e13-fa5d1f135b9f","Type":"ContainerStarted","Data":"58a2c29198caaccbf97e3282e02eab9b8314a0e09736da106c73595467d0ee18"} Feb 23 13:30:59.609052 master-0 kubenswrapper[26474]: I0223 13:30:59.609002 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 23 13:30:59.670377 master-0 kubenswrapper[26474]: I0223 13:30:59.670255 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.670233312 podStartE2EDuration="2.670233312s" podCreationTimestamp="2026-02-23 13:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:59.654647662 +0000 UTC m=+981.501155359" watchObservedRunningTime="2026-02-23 13:30:59.670233312 +0000 UTC m=+981.516740989" Feb 23 13:31:00.415148 master-0 kubenswrapper[26474]: I0223 13:31:00.415082 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5748884f-7269-4db7-9bfa-e80b7a3f1f1c" path="/var/lib/kubelet/pods/5748884f-7269-4db7-9bfa-e80b7a3f1f1c/volumes" Feb 23 13:31:00.619840 master-0 kubenswrapper[26474]: I0223 13:31:00.619777 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1","Type":"ContainerStarted","Data":"f8cb3139fd1f37714ba2920438ee197b026b137f0729d65acdde39b3e39285fa"} Feb 23 13:31:00.619840 master-0 kubenswrapper[26474]: I0223 13:31:00.619841 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1","Type":"ContainerStarted","Data":"7d0c9f77d42982e3f6c4c2535ecef555447834781b47a9a83f1a6d55f2f9cf00"} Feb 23 13:31:00.640949 master-0 kubenswrapper[26474]: I0223 13:31:00.640848 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.640824845 podStartE2EDuration="2.640824845s" podCreationTimestamp="2026-02-23 13:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:00.639540364 +0000 UTC m=+982.486048051" watchObservedRunningTime="2026-02-23 13:31:00.640824845 +0000 UTC m=+982.487332522" Feb 23 13:31:01.036918 master-0 kubenswrapper[26474]: I0223 13:31:01.036832 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 23 13:31:01.036918 master-0 kubenswrapper[26474]: I0223 13:31:01.036881 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 23 13:31:01.036918 master-0 kubenswrapper[26474]: I0223 13:31:01.036894 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 23 13:31:01.036918 master-0 kubenswrapper[26474]: I0223 13:31:01.036904 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 23 13:31:01.057056 master-0 kubenswrapper[26474]: I0223 13:31:01.056961 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 23 13:31:01.059746 master-0 kubenswrapper[26474]: I0223 13:31:01.059628 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 23 13:31:01.643845 master-0 kubenswrapper[26474]: I0223 13:31:01.643776 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 23 13:31:01.648200 master-0 kubenswrapper[26474]: I0223 13:31:01.647198 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 23 13:31:02.121267 master-0 kubenswrapper[26474]: I0223 13:31:02.121187 26474 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 13:31:03.107462 master-0 kubenswrapper[26474]: I0223 13:31:03.107400 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:31:03.107462 master-0 kubenswrapper[26474]: I0223 13:31:03.107464 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:31:04.590540 master-0 kubenswrapper[26474]: I0223 13:31:04.590479 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 23 13:31:07.117801 master-0 kubenswrapper[26474]: I0223 13:31:07.117704 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 13:31:07.155413 master-0 kubenswrapper[26474]: I0223 13:31:07.155331 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 13:31:08.073364 master-0 kubenswrapper[26474]: I0223 13:31:08.072861 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 13:31:08.107859 master-0 kubenswrapper[26474]: I0223 13:31:08.107794 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 13:31:08.107859 master-0 kubenswrapper[26474]: I0223 13:31:08.107860 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 13:31:09.076797 master-0 kubenswrapper[26474]: I0223 13:31:09.076703 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 13:31:09.076797 master-0 kubenswrapper[26474]: I0223 13:31:09.076814 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 13:31:09.130369 master-0 kubenswrapper[26474]: I0223 13:31:09.129524 26474 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:31:09.130369 master-0 kubenswrapper[26474]: I0223 13:31:09.129550 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:31:10.158894 master-0 kubenswrapper[26474]: I0223 13:31:10.158669 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 13:31:10.158894 master-0 kubenswrapper[26474]: I0223 13:31:10.158718 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 13:31:14.622599 master-0 kubenswrapper[26474]: I0223 13:31:14.622517 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:14.657905 master-0 kubenswrapper[26474]: I0223 13:31:14.657806 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-config-data\") pod \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " Feb 23 13:31:14.658128 master-0 kubenswrapper[26474]: I0223 13:31:14.658008 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-combined-ca-bundle\") pod \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " Feb 23 13:31:14.658128 master-0 kubenswrapper[26474]: I0223 13:31:14.658088 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzwps\" (UniqueName: \"kubernetes.io/projected/77f637a9-bbed-47c0-8a4f-13ebc89047f9-kube-api-access-pzwps\") pod \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\" (UID: \"77f637a9-bbed-47c0-8a4f-13ebc89047f9\") " Feb 23 13:31:14.668226 master-0 kubenswrapper[26474]: I0223 13:31:14.667937 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f637a9-bbed-47c0-8a4f-13ebc89047f9-kube-api-access-pzwps" (OuterVolumeSpecName: "kube-api-access-pzwps") pod "77f637a9-bbed-47c0-8a4f-13ebc89047f9" (UID: "77f637a9-bbed-47c0-8a4f-13ebc89047f9"). InnerVolumeSpecName "kube-api-access-pzwps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:31:14.709447 master-0 kubenswrapper[26474]: I0223 13:31:14.709378 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-config-data" (OuterVolumeSpecName: "config-data") pod "77f637a9-bbed-47c0-8a4f-13ebc89047f9" (UID: "77f637a9-bbed-47c0-8a4f-13ebc89047f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:14.711435 master-0 kubenswrapper[26474]: I0223 13:31:14.711371 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77f637a9-bbed-47c0-8a4f-13ebc89047f9" (UID: "77f637a9-bbed-47c0-8a4f-13ebc89047f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:14.763052 master-0 kubenswrapper[26474]: I0223 13:31:14.762811 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:14.763052 master-0 kubenswrapper[26474]: I0223 13:31:14.762861 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f637a9-bbed-47c0-8a4f-13ebc89047f9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:14.763052 master-0 kubenswrapper[26474]: I0223 13:31:14.762878 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzwps\" (UniqueName: \"kubernetes.io/projected/77f637a9-bbed-47c0-8a4f-13ebc89047f9-kube-api-access-pzwps\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:15.122762 master-0 kubenswrapper[26474]: I0223 13:31:15.122635 26474 generic.go:334] "Generic (PLEG): container finished" 
podID="77f637a9-bbed-47c0-8a4f-13ebc89047f9" containerID="7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed" exitCode=137 Feb 23 13:31:15.122762 master-0 kubenswrapper[26474]: I0223 13:31:15.122699 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"77f637a9-bbed-47c0-8a4f-13ebc89047f9","Type":"ContainerDied","Data":"7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed"} Feb 23 13:31:15.122762 master-0 kubenswrapper[26474]: I0223 13:31:15.122735 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"77f637a9-bbed-47c0-8a4f-13ebc89047f9","Type":"ContainerDied","Data":"d34e5c38bf0727a6e2c436118005aecdb8060c43dfe255a88b3f53063d31ae28"} Feb 23 13:31:15.122762 master-0 kubenswrapper[26474]: I0223 13:31:15.122744 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.123137 master-0 kubenswrapper[26474]: I0223 13:31:15.122754 26474 scope.go:117] "RemoveContainer" containerID="7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed" Feb 23 13:31:15.166224 master-0 kubenswrapper[26474]: I0223 13:31:15.166133 26474 scope.go:117] "RemoveContainer" containerID="7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed" Feb 23 13:31:15.166883 master-0 kubenswrapper[26474]: E0223 13:31:15.166812 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed\": container with ID starting with 7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed not found: ID does not exist" containerID="7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed" Feb 23 13:31:15.166933 master-0 kubenswrapper[26474]: I0223 13:31:15.166898 26474 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed"} err="failed to get container status \"7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed\": rpc error: code = NotFound desc = could not find container \"7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed\": container with ID starting with 7a499b156757afea31a3958dc1396d1e01eea421d91d83d17b341ec6840823ed not found: ID does not exist" Feb 23 13:31:15.219363 master-0 kubenswrapper[26474]: I0223 13:31:15.219246 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:31:15.236474 master-0 kubenswrapper[26474]: I0223 13:31:15.236401 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:31:15.269190 master-0 kubenswrapper[26474]: I0223 13:31:15.268636 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:31:15.270016 master-0 kubenswrapper[26474]: E0223 13:31:15.269959 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f637a9-bbed-47c0-8a4f-13ebc89047f9" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 13:31:15.270016 master-0 kubenswrapper[26474]: I0223 13:31:15.270013 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f637a9-bbed-47c0-8a4f-13ebc89047f9" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 13:31:15.270727 master-0 kubenswrapper[26474]: I0223 13:31:15.270677 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f637a9-bbed-47c0-8a4f-13ebc89047f9" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 13:31:15.272912 master-0 kubenswrapper[26474]: I0223 13:31:15.272850 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.275956 master-0 kubenswrapper[26474]: I0223 13:31:15.275890 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 13:31:15.276205 master-0 kubenswrapper[26474]: I0223 13:31:15.276171 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 23 13:31:15.278827 master-0 kubenswrapper[26474]: I0223 13:31:15.278769 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 23 13:31:15.289645 master-0 kubenswrapper[26474]: I0223 13:31:15.289383 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:31:15.379731 master-0 kubenswrapper[26474]: I0223 13:31:15.379569 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.379918 master-0 kubenswrapper[26474]: I0223 13:31:15.379765 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82xm\" (UniqueName: \"kubernetes.io/projected/848d5b91-e789-4b14-b3a0-347ff6e6656b-kube-api-access-p82xm\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.379918 master-0 kubenswrapper[26474]: I0223 13:31:15.379881 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-vencrypt-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.379984 master-0 kubenswrapper[26474]: I0223 13:31:15.379937 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.380028 master-0 kubenswrapper[26474]: I0223 13:31:15.380003 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.482791 master-0 kubenswrapper[26474]: I0223 13:31:15.482688 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.483028 master-0 kubenswrapper[26474]: I0223 13:31:15.482873 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.483028 master-0 kubenswrapper[26474]: I0223 13:31:15.483011 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.483196 master-0 kubenswrapper[26474]: I0223 13:31:15.483162 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p82xm\" (UniqueName: \"kubernetes.io/projected/848d5b91-e789-4b14-b3a0-347ff6e6656b-kube-api-access-p82xm\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.483417 master-0 kubenswrapper[26474]: I0223 13:31:15.483313 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.488237 master-0 kubenswrapper[26474]: I0223 13:31:15.488198 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.488678 master-0 kubenswrapper[26474]: I0223 13:31:15.488626 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.488747 master-0 kubenswrapper[26474]: I0223 13:31:15.488655 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.489350 master-0 kubenswrapper[26474]: I0223 13:31:15.489306 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848d5b91-e789-4b14-b3a0-347ff6e6656b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.538620 master-0 kubenswrapper[26474]: I0223 13:31:15.538546 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82xm\" (UniqueName: \"kubernetes.io/projected/848d5b91-e789-4b14-b3a0-347ff6e6656b-kube-api-access-p82xm\") pod \"nova-cell1-novncproxy-0\" (UID: \"848d5b91-e789-4b14-b3a0-347ff6e6656b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:15.599980 master-0 kubenswrapper[26474]: I0223 13:31:15.599880 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:16.131688 master-0 kubenswrapper[26474]: I0223 13:31:16.131602 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:31:16.411521 master-0 kubenswrapper[26474]: I0223 13:31:16.411352 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f637a9-bbed-47c0-8a4f-13ebc89047f9" path="/var/lib/kubelet/pods/77f637a9-bbed-47c0-8a4f-13ebc89047f9/volumes" Feb 23 13:31:17.164217 master-0 kubenswrapper[26474]: I0223 13:31:17.163923 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"848d5b91-e789-4b14-b3a0-347ff6e6656b","Type":"ContainerStarted","Data":"fb33b38629011dadc71f6bbd1e5ea049a62833a13eeb8167fc7aa554ab6677e9"} Feb 23 13:31:17.164217 master-0 kubenswrapper[26474]: I0223 13:31:17.163993 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"848d5b91-e789-4b14-b3a0-347ff6e6656b","Type":"ContainerStarted","Data":"5c1cbf0c5b1698b2532886cdde19dbf69e67c062fe2f424e8fbaf95871f6c44d"} Feb 23 13:31:17.212529 master-0 kubenswrapper[26474]: I0223 13:31:17.212292 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.212264971 podStartE2EDuration="2.212264971s" podCreationTimestamp="2026-02-23 13:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:17.207767672 +0000 UTC m=+999.054275359" watchObservedRunningTime="2026-02-23 13:31:17.212264971 +0000 UTC m=+999.058772658" Feb 23 13:31:18.116823 master-0 kubenswrapper[26474]: I0223 13:31:18.116606 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 13:31:18.117171 master-0 kubenswrapper[26474]: I0223 13:31:18.117064 26474 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 13:31:18.131418 master-0 kubenswrapper[26474]: I0223 13:31:18.130996 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 13:31:18.206502 master-0 kubenswrapper[26474]: I0223 13:31:18.206443 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 13:31:19.080424 master-0 kubenswrapper[26474]: I0223 13:31:19.080341 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 13:31:19.081009 master-0 kubenswrapper[26474]: I0223 13:31:19.080955 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 13:31:19.081735 master-0 kubenswrapper[26474]: I0223 13:31:19.081657 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 13:31:19.084045 master-0 kubenswrapper[26474]: I0223 13:31:19.084004 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 13:31:19.205372 master-0 kubenswrapper[26474]: I0223 13:31:19.205115 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 13:31:19.209915 master-0 kubenswrapper[26474]: I0223 13:31:19.209646 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 13:31:19.546722 master-0 kubenswrapper[26474]: I0223 13:31:19.544607 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dfc76fc6c-ffppf"] Feb 23 13:31:19.548210 master-0 kubenswrapper[26474]: I0223 13:31:19.548156 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.565489 master-0 kubenswrapper[26474]: I0223 13:31:19.565032 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dfc76fc6c-ffppf"] Feb 23 13:31:19.723743 master-0 kubenswrapper[26474]: I0223 13:31:19.723575 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-dns-swift-storage-0\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.723743 master-0 kubenswrapper[26474]: I0223 13:31:19.723704 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-config\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.724100 master-0 kubenswrapper[26474]: I0223 13:31:19.724034 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75r8\" (UniqueName: \"kubernetes.io/projected/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-kube-api-access-m75r8\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.724340 master-0 kubenswrapper[26474]: I0223 13:31:19.724300 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.724529 master-0 kubenswrapper[26474]: 
I0223 13:31:19.724502 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.724686 master-0 kubenswrapper[26474]: I0223 13:31:19.724658 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-dns-svc\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.827600 master-0 kubenswrapper[26474]: I0223 13:31:19.827495 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-dns-svc\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.827882 master-0 kubenswrapper[26474]: I0223 13:31:19.827691 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-dns-swift-storage-0\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.827882 master-0 kubenswrapper[26474]: I0223 13:31:19.827809 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-config\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.827968 master-0 
kubenswrapper[26474]: I0223 13:31:19.827936 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m75r8\" (UniqueName: \"kubernetes.io/projected/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-kube-api-access-m75r8\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.828026 master-0 kubenswrapper[26474]: I0223 13:31:19.828002 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.828102 master-0 kubenswrapper[26474]: I0223 13:31:19.828071 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.829328 master-0 kubenswrapper[26474]: I0223 13:31:19.829273 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-dns-swift-storage-0\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.829684 master-0 kubenswrapper[26474]: I0223 13:31:19.829630 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-config\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.829853 
master-0 kubenswrapper[26474]: I0223 13:31:19.829801 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.830263 master-0 kubenswrapper[26474]: I0223 13:31:19.830167 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.830680 master-0 kubenswrapper[26474]: I0223 13:31:19.830485 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-dns-svc\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.854211 master-0 kubenswrapper[26474]: I0223 13:31:19.854131 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m75r8\" (UniqueName: \"kubernetes.io/projected/180291fa-ce8f-42e3-92c5-ec2c2bcb2e06-kube-api-access-m75r8\") pod \"dnsmasq-dns-7dfc76fc6c-ffppf\" (UID: \"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06\") " pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:19.899441 master-0 kubenswrapper[26474]: I0223 13:31:19.899320 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:20.535140 master-0 kubenswrapper[26474]: I0223 13:31:20.534998 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dfc76fc6c-ffppf"] Feb 23 13:31:20.606664 master-0 kubenswrapper[26474]: I0223 13:31:20.606589 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:21.362750 master-0 kubenswrapper[26474]: I0223 13:31:21.358276 26474 generic.go:334] "Generic (PLEG): container finished" podID="180291fa-ce8f-42e3-92c5-ec2c2bcb2e06" containerID="3d297516e5e164e08c597614d22b1925d3eca33b1609215e158c1fec76ccd6eb" exitCode=0 Feb 23 13:31:21.362750 master-0 kubenswrapper[26474]: I0223 13:31:21.358516 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" event={"ID":"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06","Type":"ContainerDied","Data":"3d297516e5e164e08c597614d22b1925d3eca33b1609215e158c1fec76ccd6eb"} Feb 23 13:31:21.362750 master-0 kubenswrapper[26474]: I0223 13:31:21.358548 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" event={"ID":"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06","Type":"ContainerStarted","Data":"455360ca9085a0b53802b28692d1ff57415da536782704a6651015c8df600fab"} Feb 23 13:31:22.380450 master-0 kubenswrapper[26474]: I0223 13:31:22.380279 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" event={"ID":"180291fa-ce8f-42e3-92c5-ec2c2bcb2e06","Type":"ContainerStarted","Data":"0edaa155e6acb2eac83aab1d306884430afa24c5b239f061c9d89f542a8c75d2"} Feb 23 13:31:22.381318 master-0 kubenswrapper[26474]: I0223 13:31:22.381200 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:22.413450 master-0 kubenswrapper[26474]: I0223 13:31:22.413302 26474 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" podStartSLOduration=3.413279147 podStartE2EDuration="3.413279147s" podCreationTimestamp="2026-02-23 13:31:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:22.408536642 +0000 UTC m=+1004.255044359" watchObservedRunningTime="2026-02-23 13:31:22.413279147 +0000 UTC m=+1004.259786824" Feb 23 13:31:22.738697 master-0 kubenswrapper[26474]: I0223 13:31:22.738509 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:31:22.739026 master-0 kubenswrapper[26474]: I0223 13:31:22.738854 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-log" containerID="cri-o://7d0c9f77d42982e3f6c4c2535ecef555447834781b47a9a83f1a6d55f2f9cf00" gracePeriod=30 Feb 23 13:31:22.739599 master-0 kubenswrapper[26474]: I0223 13:31:22.739514 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-api" containerID="cri-o://f8cb3139fd1f37714ba2920438ee197b026b137f0729d65acdde39b3e39285fa" gracePeriod=30 Feb 23 13:31:23.392760 master-0 kubenswrapper[26474]: I0223 13:31:23.392690 26474 generic.go:334] "Generic (PLEG): container finished" podID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerID="7d0c9f77d42982e3f6c4c2535ecef555447834781b47a9a83f1a6d55f2f9cf00" exitCode=143 Feb 23 13:31:23.392760 master-0 kubenswrapper[26474]: I0223 13:31:23.392743 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1","Type":"ContainerDied","Data":"7d0c9f77d42982e3f6c4c2535ecef555447834781b47a9a83f1a6d55f2f9cf00"} Feb 23 13:31:25.601144 master-0 kubenswrapper[26474]: I0223 13:31:25.601002 26474 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:25.622861 master-0 kubenswrapper[26474]: I0223 13:31:25.622761 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:26.446085 master-0 kubenswrapper[26474]: I0223 13:31:26.445894 26474 generic.go:334] "Generic (PLEG): container finished" podID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerID="f8cb3139fd1f37714ba2920438ee197b026b137f0729d65acdde39b3e39285fa" exitCode=0 Feb 23 13:31:26.446085 master-0 kubenswrapper[26474]: I0223 13:31:26.445955 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1","Type":"ContainerDied","Data":"f8cb3139fd1f37714ba2920438ee197b026b137f0729d65acdde39b3e39285fa"} Feb 23 13:31:26.446085 master-0 kubenswrapper[26474]: I0223 13:31:26.446071 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1","Type":"ContainerDied","Data":"e761ac9e7a12d0629b4e079d3060b60d663f8ff8bac3499f370884abea4e1c56"} Feb 23 13:31:26.446085 master-0 kubenswrapper[26474]: I0223 13:31:26.446093 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e761ac9e7a12d0629b4e079d3060b60d663f8ff8bac3499f370884abea4e1c56" Feb 23 13:31:26.465503 master-0 kubenswrapper[26474]: I0223 13:31:26.465437 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:31:26.565042 master-0 kubenswrapper[26474]: I0223 13:31:26.564979 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:31:26.718292 master-0 kubenswrapper[26474]: I0223 13:31:26.718215 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-config-data\") pod \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " Feb 23 13:31:26.718953 master-0 kubenswrapper[26474]: I0223 13:31:26.718379 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-combined-ca-bundle\") pod \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " Feb 23 13:31:26.718953 master-0 kubenswrapper[26474]: I0223 13:31:26.718526 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-logs\") pod \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " Feb 23 13:31:26.718953 master-0 kubenswrapper[26474]: I0223 13:31:26.718676 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dr4l\" (UniqueName: \"kubernetes.io/projected/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-kube-api-access-9dr4l\") pod \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\" (UID: \"fb56eecb-ab6f-48ef-bb82-ba3a605b72d1\") " Feb 23 13:31:26.720684 master-0 kubenswrapper[26474]: I0223 13:31:26.720574 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-logs" (OuterVolumeSpecName: "logs") pod "fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" (UID: "fb56eecb-ab6f-48ef-bb82-ba3a605b72d1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:31:26.723037 master-0 kubenswrapper[26474]: I0223 13:31:26.722849 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-kube-api-access-9dr4l" (OuterVolumeSpecName: "kube-api-access-9dr4l") pod "fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" (UID: "fb56eecb-ab6f-48ef-bb82-ba3a605b72d1"). InnerVolumeSpecName "kube-api-access-9dr4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:31:26.754476 master-0 kubenswrapper[26474]: I0223 13:31:26.754416 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" (UID: "fb56eecb-ab6f-48ef-bb82-ba3a605b72d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:26.758668 master-0 kubenswrapper[26474]: I0223 13:31:26.758605 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-config-data" (OuterVolumeSpecName: "config-data") pod "fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" (UID: "fb56eecb-ab6f-48ef-bb82-ba3a605b72d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:26.814911 master-0 kubenswrapper[26474]: I0223 13:31:26.814819 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-7c9bb"] Feb 23 13:31:26.815583 master-0 kubenswrapper[26474]: E0223 13:31:26.815551 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-api" Feb 23 13:31:26.815583 master-0 kubenswrapper[26474]: I0223 13:31:26.815578 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-api" Feb 23 13:31:26.815729 master-0 kubenswrapper[26474]: E0223 13:31:26.815655 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-log" Feb 23 13:31:26.815729 master-0 kubenswrapper[26474]: I0223 13:31:26.815663 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-log" Feb 23 13:31:26.816076 master-0 kubenswrapper[26474]: I0223 13:31:26.815967 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-api" Feb 23 13:31:26.816139 master-0 kubenswrapper[26474]: I0223 13:31:26.816092 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" containerName="nova-api-log" Feb 23 13:31:26.817135 master-0 kubenswrapper[26474]: I0223 13:31:26.817108 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:26.819535 master-0 kubenswrapper[26474]: I0223 13:31:26.819486 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 23 13:31:26.819687 master-0 kubenswrapper[26474]: I0223 13:31:26.819630 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 23 13:31:26.821710 master-0 kubenswrapper[26474]: I0223 13:31:26.821669 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dr4l\" (UniqueName: \"kubernetes.io/projected/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-kube-api-access-9dr4l\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:26.821797 master-0 kubenswrapper[26474]: I0223 13:31:26.821710 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:26.821797 master-0 kubenswrapper[26474]: I0223 13:31:26.821729 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:26.821797 master-0 kubenswrapper[26474]: I0223 13:31:26.821741 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:26.826438 master-0 kubenswrapper[26474]: I0223 13:31:26.826381 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-gn8sx"] Feb 23 13:31:26.828020 master-0 kubenswrapper[26474]: I0223 13:31:26.827984 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:26.862540 master-0 kubenswrapper[26474]: I0223 13:31:26.862479 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-gn8sx"] Feb 23 13:31:26.877073 master-0 kubenswrapper[26474]: I0223 13:31:26.876960 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7c9bb"] Feb 23 13:31:26.925402 master-0 kubenswrapper[26474]: I0223 13:31:26.924028 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-config-data\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:26.927305 master-0 kubenswrapper[26474]: I0223 13:31:26.927226 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:26.927476 master-0 kubenswrapper[26474]: I0223 13:31:26.927443 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbmv\" (UniqueName: \"kubernetes.io/projected/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-kube-api-access-jzbmv\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:26.927788 master-0 kubenswrapper[26474]: I0223 13:31:26.927711 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-scripts\") pod 
\"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.030374 master-0 kubenswrapper[26474]: I0223 13:31:27.030164 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnz5d\" (UniqueName: \"kubernetes.io/projected/222d4bdd-c14f-480f-9a90-c89fc731cf45-kube-api-access-nnz5d\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.030374 master-0 kubenswrapper[26474]: I0223 13:31:27.030309 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-scripts\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.030677 master-0 kubenswrapper[26474]: I0223 13:31:27.030396 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-scripts\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.030677 master-0 kubenswrapper[26474]: I0223 13:31:27.030450 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-combined-ca-bundle\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.030677 master-0 kubenswrapper[26474]: I0223 13:31:27.030593 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-config-data\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.030677 master-0 kubenswrapper[26474]: I0223 13:31:27.030650 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.031644 master-0 kubenswrapper[26474]: I0223 13:31:27.031204 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-config-data\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.031644 master-0 kubenswrapper[26474]: I0223 13:31:27.031279 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzbmv\" (UniqueName: \"kubernetes.io/projected/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-kube-api-access-jzbmv\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.034577 master-0 kubenswrapper[26474]: I0223 13:31:27.034509 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.035061 master-0 kubenswrapper[26474]: I0223 13:31:27.034885 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-scripts\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.037443 master-0 kubenswrapper[26474]: I0223 13:31:27.036787 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-config-data\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.052037 master-0 kubenswrapper[26474]: I0223 13:31:27.051978 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzbmv\" (UniqueName: \"kubernetes.io/projected/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-kube-api-access-jzbmv\") pod \"nova-cell1-cell-mapping-7c9bb\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") " pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.134269 master-0 kubenswrapper[26474]: I0223 13:31:27.134199 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-scripts\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.134488 master-0 kubenswrapper[26474]: I0223 13:31:27.134311 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-combined-ca-bundle\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.134488 master-0 kubenswrapper[26474]: I0223 13:31:27.134412 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-config-data\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.134576 master-0 kubenswrapper[26474]: I0223 13:31:27.134540 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnz5d\" (UniqueName: \"kubernetes.io/projected/222d4bdd-c14f-480f-9a90-c89fc731cf45-kube-api-access-nnz5d\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.138324 master-0 kubenswrapper[26474]: I0223 13:31:27.138289 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-combined-ca-bundle\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.138873 master-0 kubenswrapper[26474]: I0223 13:31:27.138847 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-scripts\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.141245 master-0 kubenswrapper[26474]: I0223 13:31:27.141188 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-config-data\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.171753 master-0 kubenswrapper[26474]: I0223 13:31:27.171670 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7c9bb" Feb 23 13:31:27.175376 master-0 kubenswrapper[26474]: I0223 13:31:27.175305 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnz5d\" (UniqueName: \"kubernetes.io/projected/222d4bdd-c14f-480f-9a90-c89fc731cf45-kube-api-access-nnz5d\") pod \"nova-cell1-host-discover-gn8sx\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") " pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.184711 master-0 kubenswrapper[26474]: I0223 13:31:27.184650 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-gn8sx" Feb 23 13:31:27.459604 master-0 kubenswrapper[26474]: I0223 13:31:27.459532 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:31:27.604494 master-0 kubenswrapper[26474]: I0223 13:31:27.604334 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:31:27.661680 master-0 kubenswrapper[26474]: I0223 13:31:27.661618 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:31:27.692611 master-0 kubenswrapper[26474]: I0223 13:31:27.692534 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 13:31:27.698531 master-0 kubenswrapper[26474]: I0223 13:31:27.698462 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:31:27.701824 master-0 kubenswrapper[26474]: I0223 13:31:27.701741 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 13:31:27.702847 master-0 kubenswrapper[26474]: I0223 13:31:27.702798 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 13:31:27.703202 master-0 kubenswrapper[26474]: I0223 13:31:27.703169 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 13:31:27.714960 master-0 kubenswrapper[26474]: I0223 13:31:27.714720 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:31:27.739793 master-0 kubenswrapper[26474]: I0223 13:31:27.739736 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7c9bb"] Feb 23 13:31:27.754013 master-0 kubenswrapper[26474]: I0223 13:31:27.753945 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-gn8sx"] Feb 23 13:31:27.760139 master-0 kubenswrapper[26474]: I0223 13:31:27.760022 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.760210 master-0 kubenswrapper[26474]: I0223 13:31:27.760157 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-public-tls-certs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.760259 master-0 kubenswrapper[26474]: I0223 13:31:27.760231 26474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9w4\" (UniqueName: \"kubernetes.io/projected/42877490-484d-4747-9834-c4ab4e701163-kube-api-access-pb9w4\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.760387 master-0 kubenswrapper[26474]: I0223 13:31:27.760273 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-internal-tls-certs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.760387 master-0 kubenswrapper[26474]: I0223 13:31:27.760355 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42877490-484d-4747-9834-c4ab4e701163-logs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.760458 master-0 kubenswrapper[26474]: I0223 13:31:27.760443 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-config-data\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.865004 master-0 kubenswrapper[26474]: I0223 13:31:27.864904 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-public-tls-certs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.865180 master-0 kubenswrapper[26474]: I0223 13:31:27.865025 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9w4\" (UniqueName: 
\"kubernetes.io/projected/42877490-484d-4747-9834-c4ab4e701163-kube-api-access-pb9w4\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.865271 master-0 kubenswrapper[26474]: I0223 13:31:27.865197 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-internal-tls-certs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.865271 master-0 kubenswrapper[26474]: I0223 13:31:27.865232 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42877490-484d-4747-9834-c4ab4e701163-logs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.866122 master-0 kubenswrapper[26474]: I0223 13:31:27.865388 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-config-data\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.866122 master-0 kubenswrapper[26474]: I0223 13:31:27.865539 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.870627 master-0 kubenswrapper[26474]: I0223 13:31:27.870565 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42877490-484d-4747-9834-c4ab4e701163-logs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.872102 
master-0 kubenswrapper[26474]: I0223 13:31:27.872017 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-internal-tls-certs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.874586 master-0 kubenswrapper[26474]: I0223 13:31:27.874548 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-config-data\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.875094 master-0 kubenswrapper[26474]: I0223 13:31:27.875043 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-public-tls-certs\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.878238 master-0 kubenswrapper[26474]: I0223 13:31:27.878181 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:27.883962 master-0 kubenswrapper[26474]: I0223 13:31:27.883901 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9w4\" (UniqueName: \"kubernetes.io/projected/42877490-484d-4747-9834-c4ab4e701163-kube-api-access-pb9w4\") pod \"nova-api-0\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") " pod="openstack/nova-api-0" Feb 23 13:31:28.035410 master-0 kubenswrapper[26474]: I0223 13:31:28.035321 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:31:28.412232 master-0 kubenswrapper[26474]: I0223 13:31:28.412095 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb56eecb-ab6f-48ef-bb82-ba3a605b72d1" path="/var/lib/kubelet/pods/fb56eecb-ab6f-48ef-bb82-ba3a605b72d1/volumes" Feb 23 13:31:28.480515 master-0 kubenswrapper[26474]: I0223 13:31:28.480458 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7c9bb" event={"ID":"f0dc82dc-15d2-4c67-be7c-ce4f798dca77","Type":"ContainerStarted","Data":"c07c0c98c03edec4e0255766eb595fdc792fc2aef8f9880e8b381e4823c764d4"} Feb 23 13:31:28.480999 master-0 kubenswrapper[26474]: I0223 13:31:28.480972 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7c9bb" event={"ID":"f0dc82dc-15d2-4c67-be7c-ce4f798dca77","Type":"ContainerStarted","Data":"50f075d12aa58c3f6dbdcf5cbab7df62fb9f3568f7e4cf617a82cd535e6344f1"} Feb 23 13:31:28.482900 master-0 kubenswrapper[26474]: I0223 13:31:28.482852 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-gn8sx" event={"ID":"222d4bdd-c14f-480f-9a90-c89fc731cf45","Type":"ContainerStarted","Data":"46069e25fc102838c1c9f5c596fea6f8c6614a8c029fda1a9b571cb51aeb2d39"} Feb 23 13:31:28.483031 master-0 kubenswrapper[26474]: I0223 13:31:28.483013 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-gn8sx" event={"ID":"222d4bdd-c14f-480f-9a90-c89fc731cf45","Type":"ContainerStarted","Data":"749cd893fa4ec574481c6c71f94d0ebf39aacfaa04774d58c9b769b7c1c7848a"} Feb 23 13:31:28.515420 master-0 kubenswrapper[26474]: I0223 13:31:28.515294 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-7c9bb" podStartSLOduration=2.515273129 podStartE2EDuration="2.515273129s" podCreationTimestamp="2026-02-23 13:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:28.507718565 +0000 UTC m=+1010.354226272" watchObservedRunningTime="2026-02-23 13:31:28.515273129 +0000 UTC m=+1010.361780796" Feb 23 13:31:28.531159 master-0 kubenswrapper[26474]: I0223 13:31:28.531053 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-gn8sx" podStartSLOduration=2.531033183 podStartE2EDuration="2.531033183s" podCreationTimestamp="2026-02-23 13:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:28.527816465 +0000 UTC m=+1010.374324152" watchObservedRunningTime="2026-02-23 13:31:28.531033183 +0000 UTC m=+1010.377540860" Feb 23 13:31:28.591980 master-0 kubenswrapper[26474]: I0223 13:31:28.590760 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:31:29.507546 master-0 kubenswrapper[26474]: I0223 13:31:29.507469 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42877490-484d-4747-9834-c4ab4e701163","Type":"ContainerStarted","Data":"3fa61ec45fc04f3170b0bc41e5b5a71ae5d9aeb143891ca4a6a8046f8c617920"} Feb 23 13:31:29.507546 master-0 kubenswrapper[26474]: I0223 13:31:29.507529 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42877490-484d-4747-9834-c4ab4e701163","Type":"ContainerStarted","Data":"dcfbb177cfa5905f41b0ff0c848f08e880aabafd413f8269817da1537be9b3cd"} Feb 23 13:31:29.507546 master-0 kubenswrapper[26474]: I0223 13:31:29.507540 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42877490-484d-4747-9834-c4ab4e701163","Type":"ContainerStarted","Data":"4c8cfa518d85559c29eb32a3a4c81e2e788af3191c38ff0bd5408176dc79ed69"} Feb 23 13:31:29.544987 master-0 kubenswrapper[26474]: I0223 13:31:29.544150 26474 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.544130304 podStartE2EDuration="2.544130304s" podCreationTimestamp="2026-02-23 13:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:29.535190306 +0000 UTC m=+1011.381697993" watchObservedRunningTime="2026-02-23 13:31:29.544130304 +0000 UTC m=+1011.390637981" Feb 23 13:31:29.901678 master-0 kubenswrapper[26474]: I0223 13:31:29.901515 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dfc76fc6c-ffppf" Feb 23 13:31:30.213767 master-0 kubenswrapper[26474]: I0223 13:31:30.213599 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f67f6fbf-px6zm"] Feb 23 13:31:30.213975 master-0 kubenswrapper[26474]: I0223 13:31:30.213858 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" podUID="9998b306-07a1-4bf0-a118-cff08aa88083" containerName="dnsmasq-dns" containerID="cri-o://8ffeeaff75e2e370fd0c57caed7e395c21bd3e7e22760a5e8dab8e083213b3f4" gracePeriod=10 Feb 23 13:31:30.529774 master-0 kubenswrapper[26474]: I0223 13:31:30.529707 26474 generic.go:334] "Generic (PLEG): container finished" podID="9998b306-07a1-4bf0-a118-cff08aa88083" containerID="8ffeeaff75e2e370fd0c57caed7e395c21bd3e7e22760a5e8dab8e083213b3f4" exitCode=0 Feb 23 13:31:30.530239 master-0 kubenswrapper[26474]: I0223 13:31:30.529773 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" event={"ID":"9998b306-07a1-4bf0-a118-cff08aa88083","Type":"ContainerDied","Data":"8ffeeaff75e2e370fd0c57caed7e395c21bd3e7e22760a5e8dab8e083213b3f4"} Feb 23 13:31:31.004407 master-0 kubenswrapper[26474]: I0223 13:31:31.004325 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" Feb 23 13:31:31.164567 master-0 kubenswrapper[26474]: I0223 13:31:31.164393 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-swift-storage-0\") pod \"9998b306-07a1-4bf0-a118-cff08aa88083\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " Feb 23 13:31:31.164567 master-0 kubenswrapper[26474]: I0223 13:31:31.164522 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-nb\") pod \"9998b306-07a1-4bf0-a118-cff08aa88083\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " Feb 23 13:31:31.164835 master-0 kubenswrapper[26474]: I0223 13:31:31.164728 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-config\") pod \"9998b306-07a1-4bf0-a118-cff08aa88083\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " Feb 23 13:31:31.164877 master-0 kubenswrapper[26474]: I0223 13:31:31.164835 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77gpt\" (UniqueName: \"kubernetes.io/projected/9998b306-07a1-4bf0-a118-cff08aa88083-kube-api-access-77gpt\") pod \"9998b306-07a1-4bf0-a118-cff08aa88083\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " Feb 23 13:31:31.164877 master-0 kubenswrapper[26474]: I0223 13:31:31.164868 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-sb\") pod \"9998b306-07a1-4bf0-a118-cff08aa88083\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " Feb 23 13:31:31.164999 master-0 kubenswrapper[26474]: I0223 13:31:31.164974 26474 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-svc\") pod \"9998b306-07a1-4bf0-a118-cff08aa88083\" (UID: \"9998b306-07a1-4bf0-a118-cff08aa88083\") " Feb 23 13:31:31.170273 master-0 kubenswrapper[26474]: I0223 13:31:31.170188 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9998b306-07a1-4bf0-a118-cff08aa88083-kube-api-access-77gpt" (OuterVolumeSpecName: "kube-api-access-77gpt") pod "9998b306-07a1-4bf0-a118-cff08aa88083" (UID: "9998b306-07a1-4bf0-a118-cff08aa88083"). InnerVolumeSpecName "kube-api-access-77gpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:31:31.254375 master-0 kubenswrapper[26474]: I0223 13:31:31.253204 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9998b306-07a1-4bf0-a118-cff08aa88083" (UID: "9998b306-07a1-4bf0-a118-cff08aa88083"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:31:31.274449 master-0 kubenswrapper[26474]: I0223 13:31:31.265881 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9998b306-07a1-4bf0-a118-cff08aa88083" (UID: "9998b306-07a1-4bf0-a118-cff08aa88083"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:31:31.274449 master-0 kubenswrapper[26474]: I0223 13:31:31.267994 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77gpt\" (UniqueName: \"kubernetes.io/projected/9998b306-07a1-4bf0-a118-cff08aa88083-kube-api-access-77gpt\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:31.274449 master-0 kubenswrapper[26474]: I0223 13:31:31.268014 26474 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:31.274449 master-0 kubenswrapper[26474]: I0223 13:31:31.268025 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:31.291753 master-0 kubenswrapper[26474]: I0223 13:31:31.287018 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9998b306-07a1-4bf0-a118-cff08aa88083" (UID: "9998b306-07a1-4bf0-a118-cff08aa88083"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:31:31.298989 master-0 kubenswrapper[26474]: I0223 13:31:31.298919 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-config" (OuterVolumeSpecName: "config") pod "9998b306-07a1-4bf0-a118-cff08aa88083" (UID: "9998b306-07a1-4bf0-a118-cff08aa88083"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:31:31.305204 master-0 kubenswrapper[26474]: I0223 13:31:31.305043 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9998b306-07a1-4bf0-a118-cff08aa88083" (UID: "9998b306-07a1-4bf0-a118-cff08aa88083"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:31:31.374070 master-0 kubenswrapper[26474]: I0223 13:31:31.372876 26474 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:31.374070 master-0 kubenswrapper[26474]: I0223 13:31:31.372956 26474 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:31.374070 master-0 kubenswrapper[26474]: I0223 13:31:31.372974 26474 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9998b306-07a1-4bf0-a118-cff08aa88083-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:31.543382 master-0 kubenswrapper[26474]: I0223 13:31:31.543309 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" event={"ID":"9998b306-07a1-4bf0-a118-cff08aa88083","Type":"ContainerDied","Data":"170ea072add31bac5f3de14668870c634b74ee26718fd798f34670a1b5d54992"} Feb 23 13:31:31.543962 master-0 kubenswrapper[26474]: I0223 13:31:31.543375 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f67f6fbf-px6zm" Feb 23 13:31:31.543962 master-0 kubenswrapper[26474]: I0223 13:31:31.543482 26474 scope.go:117] "RemoveContainer" containerID="8ffeeaff75e2e370fd0c57caed7e395c21bd3e7e22760a5e8dab8e083213b3f4" Feb 23 13:31:31.546850 master-0 kubenswrapper[26474]: I0223 13:31:31.546768 26474 generic.go:334] "Generic (PLEG): container finished" podID="808fa98d-dace-4799-9059-a26510355d62" containerID="51d5db4b94b2c7a423b364d2268763c5377e0279ac45b0486fa1a58996a9a279" exitCode=0 Feb 23 13:31:31.546947 master-0 kubenswrapper[26474]: I0223 13:31:31.546810 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerDied","Data":"51d5db4b94b2c7a423b364d2268763c5377e0279ac45b0486fa1a58996a9a279"} Feb 23 13:31:31.551109 master-0 kubenswrapper[26474]: I0223 13:31:31.551037 26474 generic.go:334] "Generic (PLEG): container finished" podID="222d4bdd-c14f-480f-9a90-c89fc731cf45" containerID="46069e25fc102838c1c9f5c596fea6f8c6614a8c029fda1a9b571cb51aeb2d39" exitCode=0 Feb 23 13:31:31.551264 master-0 kubenswrapper[26474]: I0223 13:31:31.551111 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-gn8sx" event={"ID":"222d4bdd-c14f-480f-9a90-c89fc731cf45","Type":"ContainerDied","Data":"46069e25fc102838c1c9f5c596fea6f8c6614a8c029fda1a9b571cb51aeb2d39"} Feb 23 13:31:31.590778 master-0 kubenswrapper[26474]: I0223 13:31:31.590033 26474 scope.go:117] "RemoveContainer" containerID="35146bd28b7f5beba352ded5d4c8e300690af88f6910b5cebdf8d091046d866f" Feb 23 13:31:31.857821 master-0 kubenswrapper[26474]: I0223 13:31:31.857752 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f67f6fbf-px6zm"] Feb 23 13:31:31.874651 master-0 kubenswrapper[26474]: I0223 13:31:31.874285 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65f67f6fbf-px6zm"] Feb 
23 13:31:32.416959 master-0 kubenswrapper[26474]: I0223 13:31:32.416851 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9998b306-07a1-4bf0-a118-cff08aa88083" path="/var/lib/kubelet/pods/9998b306-07a1-4bf0-a118-cff08aa88083/volumes" Feb 23 13:31:32.588525 master-0 kubenswrapper[26474]: I0223 13:31:32.588304 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerStarted","Data":"3a56fb6bd7c6614486ca141797d54773c6256336b49d26c5bf579a69bb0e454a"} Feb 23 13:31:32.588525 master-0 kubenswrapper[26474]: I0223 13:31:32.588385 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerStarted","Data":"177ee875daaeebb5dc2b6a920cc6ed213d9d1791a4a40a6b37547d76080759f3"} Feb 23 13:31:33.090272 master-0 kubenswrapper[26474]: I0223 13:31:33.090204 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-gn8sx"
Feb 23 13:31:33.222988 master-0 kubenswrapper[26474]: I0223 13:31:33.222941 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-scripts\") pod \"222d4bdd-c14f-480f-9a90-c89fc731cf45\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") "
Feb 23 13:31:33.223271 master-0 kubenswrapper[26474]: I0223 13:31:33.223080 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-config-data\") pod \"222d4bdd-c14f-480f-9a90-c89fc731cf45\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") "
Feb 23 13:31:33.223843 master-0 kubenswrapper[26474]: I0223 13:31:33.223795 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-combined-ca-bundle\") pod \"222d4bdd-c14f-480f-9a90-c89fc731cf45\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") "
Feb 23 13:31:33.223943 master-0 kubenswrapper[26474]: I0223 13:31:33.223855 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnz5d\" (UniqueName: \"kubernetes.io/projected/222d4bdd-c14f-480f-9a90-c89fc731cf45-kube-api-access-nnz5d\") pod \"222d4bdd-c14f-480f-9a90-c89fc731cf45\" (UID: \"222d4bdd-c14f-480f-9a90-c89fc731cf45\") "
Feb 23 13:31:33.226438 master-0 kubenswrapper[26474]: I0223 13:31:33.226351 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-scripts" (OuterVolumeSpecName: "scripts") pod "222d4bdd-c14f-480f-9a90-c89fc731cf45" (UID: "222d4bdd-c14f-480f-9a90-c89fc731cf45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:33.228222 master-0 kubenswrapper[26474]: I0223 13:31:33.226916 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222d4bdd-c14f-480f-9a90-c89fc731cf45-kube-api-access-nnz5d" (OuterVolumeSpecName: "kube-api-access-nnz5d") pod "222d4bdd-c14f-480f-9a90-c89fc731cf45" (UID: "222d4bdd-c14f-480f-9a90-c89fc731cf45"). InnerVolumeSpecName "kube-api-access-nnz5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:31:33.260602 master-0 kubenswrapper[26474]: I0223 13:31:33.260512 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-config-data" (OuterVolumeSpecName: "config-data") pod "222d4bdd-c14f-480f-9a90-c89fc731cf45" (UID: "222d4bdd-c14f-480f-9a90-c89fc731cf45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:33.266518 master-0 kubenswrapper[26474]: I0223 13:31:33.266440 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "222d4bdd-c14f-480f-9a90-c89fc731cf45" (UID: "222d4bdd-c14f-480f-9a90-c89fc731cf45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:33.327446 master-0 kubenswrapper[26474]: I0223 13:31:33.327314 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:33.327446 master-0 kubenswrapper[26474]: I0223 13:31:33.327400 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnz5d\" (UniqueName: \"kubernetes.io/projected/222d4bdd-c14f-480f-9a90-c89fc731cf45-kube-api-access-nnz5d\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:33.327446 master-0 kubenswrapper[26474]: I0223 13:31:33.327416 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:33.327446 master-0 kubenswrapper[26474]: I0223 13:31:33.327427 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222d4bdd-c14f-480f-9a90-c89fc731cf45-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:33.601469 master-0 kubenswrapper[26474]: I0223 13:31:33.601403 26474 generic.go:334] "Generic (PLEG): container finished" podID="f0dc82dc-15d2-4c67-be7c-ce4f798dca77" containerID="c07c0c98c03edec4e0255766eb595fdc792fc2aef8f9880e8b381e4823c764d4" exitCode=0
Feb 23 13:31:33.601971 master-0 kubenswrapper[26474]: I0223 13:31:33.601488 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7c9bb" event={"ID":"f0dc82dc-15d2-4c67-be7c-ce4f798dca77","Type":"ContainerDied","Data":"c07c0c98c03edec4e0255766eb595fdc792fc2aef8f9880e8b381e4823c764d4"}
Feb 23 13:31:33.605277 master-0 kubenswrapper[26474]: I0223 13:31:33.605228 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0"
event={"ID":"808fa98d-dace-4799-9059-a26510355d62","Type":"ContainerStarted","Data":"a8e69a05bdf9f4eee2ddcff65dbde18e79026d6670e5851b6b118e6af659e310"}
Feb 23 13:31:33.607245 master-0 kubenswrapper[26474]: I0223 13:31:33.607204 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-gn8sx" event={"ID":"222d4bdd-c14f-480f-9a90-c89fc731cf45","Type":"ContainerDied","Data":"749cd893fa4ec574481c6c71f94d0ebf39aacfaa04774d58c9b769b7c1c7848a"}
Feb 23 13:31:33.607245 master-0 kubenswrapper[26474]: I0223 13:31:33.607245 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="749cd893fa4ec574481c6c71f94d0ebf39aacfaa04774d58c9b769b7c1c7848a"
Feb 23 13:31:33.607381 master-0 kubenswrapper[26474]: I0223 13:31:33.607262 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-gn8sx"
Feb 23 13:31:33.664008 master-0 kubenswrapper[26474]: I0223 13:31:33.663921 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=62.264177328 podStartE2EDuration="2m14.663900117s" podCreationTimestamp="2026-02-23 13:29:19 +0000 UTC" firstStartedPulling="2026-02-23 13:29:30.219138823 +0000 UTC m=+892.065646500" lastFinishedPulling="2026-02-23 13:30:42.618861612 +0000 UTC m=+964.465369289" observedRunningTime="2026-02-23 13:31:33.655268346 +0000 UTC m=+1015.501776053" watchObservedRunningTime="2026-02-23 13:31:33.663900117 +0000 UTC m=+1015.510407804"
Feb 23 13:31:34.619528 master-0 kubenswrapper[26474]: I0223 13:31:34.618949 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Feb 23 13:31:34.619528 master-0 kubenswrapper[26474]: I0223 13:31:34.618999 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Feb 23 13:31:35.132012 master-0 kubenswrapper[26474]: I0223 13:31:35.131892 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0"
Feb 23 13:31:35.164580 master-0 kubenswrapper[26474]: I0223 13:31:35.164516 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7c9bb"
Feb 23 13:31:35.328925 master-0 kubenswrapper[26474]: I0223 13:31:35.328732 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzbmv\" (UniqueName: \"kubernetes.io/projected/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-kube-api-access-jzbmv\") pod \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") "
Feb 23 13:31:35.329141 master-0 kubenswrapper[26474]: I0223 13:31:35.328998 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-combined-ca-bundle\") pod \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") "
Feb 23 13:31:35.329141 master-0 kubenswrapper[26474]: I0223 13:31:35.329043 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-scripts\") pod \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") "
Feb 23 13:31:35.329214 master-0 kubenswrapper[26474]: I0223 13:31:35.329163 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-config-data\") pod \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\" (UID: \"f0dc82dc-15d2-4c67-be7c-ce4f798dca77\") "
Feb 23 13:31:35.333239 master-0 kubenswrapper[26474]: I0223 13:31:35.333157 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-scripts" (OuterVolumeSpecName: "scripts") pod "f0dc82dc-15d2-4c67-be7c-ce4f798dca77" (UID: "f0dc82dc-15d2-4c67-be7c-ce4f798dca77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:35.333933 master-0 kubenswrapper[26474]: I0223 13:31:35.333733 26474 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-scripts\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:35.346990 master-0 kubenswrapper[26474]: I0223 13:31:35.346910 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-kube-api-access-jzbmv" (OuterVolumeSpecName: "kube-api-access-jzbmv") pod "f0dc82dc-15d2-4c67-be7c-ce4f798dca77" (UID: "f0dc82dc-15d2-4c67-be7c-ce4f798dca77"). InnerVolumeSpecName "kube-api-access-jzbmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:31:35.368188 master-0 kubenswrapper[26474]: I0223 13:31:35.368083 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0dc82dc-15d2-4c67-be7c-ce4f798dca77" (UID: "f0dc82dc-15d2-4c67-be7c-ce4f798dca77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:35.368500 master-0 kubenswrapper[26474]: I0223 13:31:35.368240 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-config-data" (OuterVolumeSpecName: "config-data") pod "f0dc82dc-15d2-4c67-be7c-ce4f798dca77" (UID: "f0dc82dc-15d2-4c67-be7c-ce4f798dca77"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:35.435729 master-0 kubenswrapper[26474]: I0223 13:31:35.435653 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzbmv\" (UniqueName: \"kubernetes.io/projected/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-kube-api-access-jzbmv\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:35.435729 master-0 kubenswrapper[26474]: I0223 13:31:35.435719 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:35.435729 master-0 kubenswrapper[26474]: I0223 13:31:35.435731 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0dc82dc-15d2-4c67-be7c-ce4f798dca77-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:35.631158 master-0 kubenswrapper[26474]: I0223 13:31:35.631022 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7c9bb"
Feb 23 13:31:35.631158 master-0 kubenswrapper[26474]: I0223 13:31:35.631019 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7c9bb" event={"ID":"f0dc82dc-15d2-4c67-be7c-ce4f798dca77","Type":"ContainerDied","Data":"50f075d12aa58c3f6dbdcf5cbab7df62fb9f3568f7e4cf617a82cd535e6344f1"}
Feb 23 13:31:35.631717 master-0 kubenswrapper[26474]: I0223 13:31:35.631177 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f075d12aa58c3f6dbdcf5cbab7df62fb9f3568f7e4cf617a82cd535e6344f1"
Feb 23 13:31:35.673774 master-0 kubenswrapper[26474]: E0223 13:31:35.673677 26474 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0dc82dc_15d2_4c67_be7c_ce4f798dca77.slice/crio-50f075d12aa58c3f6dbdcf5cbab7df62fb9f3568f7e4cf617a82cd535e6344f1\": RecentStats: unable to find data in memory cache]"
Feb 23 13:31:35.871681 master-0 kubenswrapper[26474]: I0223 13:31:35.871617 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:31:35.871955 master-0 kubenswrapper[26474]: I0223 13:31:35.871860 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="37c3e157-78c6-4d54-9bf5-9ed12e893297" containerName="nova-scheduler-scheduler" containerID="cri-o://a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9" gracePeriod=30
Feb 23 13:31:35.898941 master-0 kubenswrapper[26474]: I0223 13:31:35.898769 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:31:35.899169 master-0 kubenswrapper[26474]: I0223 13:31:35.899072 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42877490-484d-4747-9834-c4ab4e701163" containerName="nova-api-log" containerID="cri-o://dcfbb177cfa5905f41b0ff0c848f08e880aabafd413f8269817da1537be9b3cd" gracePeriod=30
Feb 23 13:31:35.899251 master-0 kubenswrapper[26474]: I0223 13:31:35.899182 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42877490-484d-4747-9834-c4ab4e701163" containerName="nova-api-api" containerID="cri-o://3fa61ec45fc04f3170b0bc41e5b5a71ae5d9aeb143891ca4a6a8046f8c617920" gracePeriod=30
Feb 23 13:31:35.987840 master-0 kubenswrapper[26474]: I0223 13:31:35.987765 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 13:31:35.988188 master-0 kubenswrapper[26474]: I0223 13:31:35.988140 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-log" containerID="cri-o://3a16901f9d43b9ca07c800b6b2c486dcca7c3ee7411f065ebdb99dc0965d96e7" gracePeriod=30
Feb 23 13:31:35.988656 master-0 kubenswrapper[26474]: I0223 13:31:35.988602 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-metadata" containerID="cri-o://cf5631018557db759c7b8824aa3f2c163dd11007f294dce6a6e0c75cc64e9391" gracePeriod=30
Feb 23 13:31:36.642410 master-0 kubenswrapper[26474]: I0223 13:31:36.642306 26474 generic.go:334] "Generic (PLEG): container finished" podID="42877490-484d-4747-9834-c4ab4e701163" containerID="3fa61ec45fc04f3170b0bc41e5b5a71ae5d9aeb143891ca4a6a8046f8c617920" exitCode=0
Feb 23 13:31:36.642410 master-0 kubenswrapper[26474]: I0223 13:31:36.642358 26474 generic.go:334] "Generic (PLEG): container finished" podID="42877490-484d-4747-9834-c4ab4e701163" containerID="dcfbb177cfa5905f41b0ff0c848f08e880aabafd413f8269817da1537be9b3cd" exitCode=143
Feb 23 13:31:36.642410 master-0 kubenswrapper[26474]: I0223
13:31:36.642393 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42877490-484d-4747-9834-c4ab4e701163","Type":"ContainerDied","Data":"3fa61ec45fc04f3170b0bc41e5b5a71ae5d9aeb143891ca4a6a8046f8c617920"}
Feb 23 13:31:36.642410 master-0 kubenswrapper[26474]: I0223 13:31:36.642420 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42877490-484d-4747-9834-c4ab4e701163","Type":"ContainerDied","Data":"dcfbb177cfa5905f41b0ff0c848f08e880aabafd413f8269817da1537be9b3cd"}
Feb 23 13:31:36.642410 master-0 kubenswrapper[26474]: I0223 13:31:36.642430 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42877490-484d-4747-9834-c4ab4e701163","Type":"ContainerDied","Data":"4c8cfa518d85559c29eb32a3a4c81e2e788af3191c38ff0bd5408176dc79ed69"}
Feb 23 13:31:36.643182 master-0 kubenswrapper[26474]: I0223 13:31:36.642440 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c8cfa518d85559c29eb32a3a4c81e2e788af3191c38ff0bd5408176dc79ed69"
Feb 23 13:31:36.644585 master-0 kubenswrapper[26474]: I0223 13:31:36.644546 26474 generic.go:334] "Generic (PLEG): container finished" podID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerID="3a16901f9d43b9ca07c800b6b2c486dcca7c3ee7411f065ebdb99dc0965d96e7" exitCode=143
Feb 23 13:31:36.644802 master-0 kubenswrapper[26474]: I0223 13:31:36.644723 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d84999a2-9c53-4695-9e13-fa5d1f135b9f","Type":"ContainerDied","Data":"3a16901f9d43b9ca07c800b6b2c486dcca7c3ee7411f065ebdb99dc0965d96e7"}
Feb 23 13:31:36.671369 master-0 kubenswrapper[26474]: I0223 13:31:36.662968 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0"
Feb 23 13:31:36.671369 master-0 kubenswrapper[26474]: I0223 13:31:36.665490 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 13:31:36.683308 master-0 kubenswrapper[26474]: I0223 13:31:36.683252 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Feb 23 13:31:36.806819 master-0 kubenswrapper[26474]: I0223 13:31:36.804381 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-public-tls-certs\") pod \"42877490-484d-4747-9834-c4ab4e701163\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") "
Feb 23 13:31:36.806819 master-0 kubenswrapper[26474]: I0223 13:31:36.804594 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-combined-ca-bundle\") pod \"42877490-484d-4747-9834-c4ab4e701163\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") "
Feb 23 13:31:36.806819 master-0 kubenswrapper[26474]: I0223 13:31:36.805040 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42877490-484d-4747-9834-c4ab4e701163-logs\") pod \"42877490-484d-4747-9834-c4ab4e701163\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") "
Feb 23 13:31:36.806819 master-0 kubenswrapper[26474]: I0223 13:31:36.805172 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-internal-tls-certs\") pod \"42877490-484d-4747-9834-c4ab4e701163\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") "
Feb 23 13:31:36.806819 master-0 kubenswrapper[26474]: I0223 13:31:36.805292 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-config-data\") pod \"42877490-484d-4747-9834-c4ab4e701163\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") "
Feb 23 13:31:36.806819 master-0 kubenswrapper[26474]: I0223 13:31:36.805366 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb9w4\" (UniqueName: \"kubernetes.io/projected/42877490-484d-4747-9834-c4ab4e701163-kube-api-access-pb9w4\") pod \"42877490-484d-4747-9834-c4ab4e701163\" (UID: \"42877490-484d-4747-9834-c4ab4e701163\") "
Feb 23 13:31:36.806819 master-0 kubenswrapper[26474]: I0223 13:31:36.805930 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42877490-484d-4747-9834-c4ab4e701163-logs" (OuterVolumeSpecName: "logs") pod "42877490-484d-4747-9834-c4ab4e701163" (UID: "42877490-484d-4747-9834-c4ab4e701163"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:31:36.806819 master-0 kubenswrapper[26474]: I0223 13:31:36.806504 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42877490-484d-4747-9834-c4ab4e701163-logs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:36.815200 master-0 kubenswrapper[26474]: I0223 13:31:36.815129 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42877490-484d-4747-9834-c4ab4e701163-kube-api-access-pb9w4" (OuterVolumeSpecName: "kube-api-access-pb9w4") pod "42877490-484d-4747-9834-c4ab4e701163" (UID: "42877490-484d-4747-9834-c4ab4e701163"). InnerVolumeSpecName "kube-api-access-pb9w4".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:31:36.836564 master-0 kubenswrapper[26474]: I0223 13:31:36.836494 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42877490-484d-4747-9834-c4ab4e701163" (UID: "42877490-484d-4747-9834-c4ab4e701163"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:36.839954 master-0 kubenswrapper[26474]: I0223 13:31:36.839881 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-config-data" (OuterVolumeSpecName: "config-data") pod "42877490-484d-4747-9834-c4ab4e701163" (UID: "42877490-484d-4747-9834-c4ab4e701163"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:36.882839 master-0 kubenswrapper[26474]: I0223 13:31:36.882766 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "42877490-484d-4747-9834-c4ab4e701163" (UID: "42877490-484d-4747-9834-c4ab4e701163"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:36.903447 master-0 kubenswrapper[26474]: I0223 13:31:36.903375 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "42877490-484d-4747-9834-c4ab4e701163" (UID: "42877490-484d-4747-9834-c4ab4e701163"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:31:36.909249 master-0 kubenswrapper[26474]: I0223 13:31:36.909183 26474 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:36.909249 master-0 kubenswrapper[26474]: I0223 13:31:36.909241 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:36.909249 master-0 kubenswrapper[26474]: I0223 13:31:36.909256 26474 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:36.909453 master-0 kubenswrapper[26474]: I0223 13:31:36.909267 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42877490-484d-4747-9834-c4ab4e701163-config-data\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:36.909453 master-0 kubenswrapper[26474]: I0223 13:31:36.909279 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb9w4\" (UniqueName: \"kubernetes.io/projected/42877490-484d-4747-9834-c4ab4e701163-kube-api-access-pb9w4\") on node \"master-0\" DevicePath \"\""
Feb 23 13:31:37.120778 master-0 kubenswrapper[26474]: E0223 13:31:37.120626 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 13:31:37.122480 master-0 kubenswrapper[26474]: E0223 13:31:37.122404 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 13:31:37.124666 master-0 kubenswrapper[26474]: E0223 13:31:37.124579 26474 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 13:31:37.124741 master-0 kubenswrapper[26474]: E0223 13:31:37.124681 26474 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="37c3e157-78c6-4d54-9bf5-9ed12e893297" containerName="nova-scheduler-scheduler"
Feb 23 13:31:37.654910 master-0 kubenswrapper[26474]: I0223 13:31:37.654840 26474 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0"
Feb 23 13:31:37.658626 master-0 kubenswrapper[26474]: I0223 13:31:37.658582 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Feb 23 13:31:37.757177 master-0 kubenswrapper[26474]: I0223 13:31:37.757100 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:31:37.783616 master-0 kubenswrapper[26474]: I0223 13:31:37.783528 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:31:37.794000 master-0 kubenswrapper[26474]: I0223 13:31:37.793926 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:31:37.794803 master-0 kubenswrapper[26474]: E0223 13:31:37.794772 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9998b306-07a1-4bf0-a118-cff08aa88083" containerName="init"
Feb 23 13:31:37.794803 master-0 kubenswrapper[26474]: I0223 13:31:37.794800 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="9998b306-07a1-4bf0-a118-cff08aa88083" containerName="init"
Feb 23 13:31:37.794897 master-0 kubenswrapper[26474]: E0223 13:31:37.794830 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42877490-484d-4747-9834-c4ab4e701163" containerName="nova-api-api"
Feb 23 13:31:37.794897 master-0 kubenswrapper[26474]: I0223 13:31:37.794841 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="42877490-484d-4747-9834-c4ab4e701163" containerName="nova-api-api"
Feb 23 13:31:37.794897 master-0 kubenswrapper[26474]: E0223 13:31:37.794856 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9998b306-07a1-4bf0-a118-cff08aa88083" containerName="dnsmasq-dns"
Feb 23 13:31:37.794897 master-0 kubenswrapper[26474]: I0223 13:31:37.794865 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="9998b306-07a1-4bf0-a118-cff08aa88083" containerName="dnsmasq-dns"
Feb 23 13:31:37.794897 master-0 kubenswrapper[26474]: E0223 13:31:37.794887 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42877490-484d-4747-9834-c4ab4e701163" containerName="nova-api-log"
Feb 23 13:31:37.794897 master-0 kubenswrapper[26474]: I0223 13:31:37.794894 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="42877490-484d-4747-9834-c4ab4e701163" containerName="nova-api-log"
Feb 23 13:31:37.795095 master-0 kubenswrapper[26474]: E0223 13:31:37.794926 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222d4bdd-c14f-480f-9a90-c89fc731cf45" containerName="nova-manage"
Feb 23 13:31:37.795095 master-0 kubenswrapper[26474]: I0223 13:31:37.794938 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="222d4bdd-c14f-480f-9a90-c89fc731cf45" containerName="nova-manage"
Feb 23 13:31:37.795095 master-0 kubenswrapper[26474]: E0223 13:31:37.794969 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0dc82dc-15d2-4c67-be7c-ce4f798dca77" containerName="nova-manage"
Feb 23 13:31:37.795095 master-0 kubenswrapper[26474]: I0223 13:31:37.794978 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0dc82dc-15d2-4c67-be7c-ce4f798dca77" containerName="nova-manage"
Feb 23 13:31:37.795321 master-0 kubenswrapper[26474]: I0223 13:31:37.795290 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0dc82dc-15d2-4c67-be7c-ce4f798dca77" containerName="nova-manage"
Feb 23 13:31:37.799360 master-0 kubenswrapper[26474]: I0223 13:31:37.795381 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="222d4bdd-c14f-480f-9a90-c89fc731cf45" containerName="nova-manage"
Feb 23 13:31:37.799360 master-0 kubenswrapper[26474]: I0223 13:31:37.795414 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="42877490-484d-4747-9834-c4ab4e701163" containerName="nova-api-api"
Feb 23 13:31:37.799360 master-0 kubenswrapper[26474]: I0223 13:31:37.795431 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="9998b306-07a1-4bf0-a118-cff08aa88083" containerName="dnsmasq-dns"
Feb 23 13:31:37.799360 master-0 kubenswrapper[26474]: I0223 13:31:37.795453 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="42877490-484d-4747-9834-c4ab4e701163" containerName="nova-api-log"
Feb 23 13:31:37.799360 master-0 kubenswrapper[26474]: I0223 13:31:37.796949 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 13:31:37.800262 master-0 kubenswrapper[26474]: I0223 13:31:37.800214 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 23 13:31:37.800521 master-0 kubenswrapper[26474]: I0223 13:31:37.800494 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 23 13:31:37.800666 master-0 kubenswrapper[26474]: I0223 13:31:37.800638 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 23 13:31:37.812472 master-0 kubenswrapper[26474]: I0223 13:31:37.809063 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:31:37.938129 master-0 kubenswrapper[26474]: I0223 13:31:37.937810 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmv66\" (UniqueName: \"kubernetes.io/projected/5b0108db-c0d2-418a-a77f-e251a3c6eec4-kube-api-access-mmv66\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:37.938375 master-0 kubenswrapper[26474]: I0223 13:31:37.938139 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-public-tls-certs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:37.938375 master-0
kubenswrapper[26474]: I0223 13:31:37.938181 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-config-data\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:37.938375 master-0 kubenswrapper[26474]: I0223 13:31:37.938267 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0108db-c0d2-418a-a77f-e251a3c6eec4-logs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:37.938912 master-0 kubenswrapper[26474]: I0223 13:31:37.938850 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:37.939108 master-0 kubenswrapper[26474]: I0223 13:31:37.939089 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.042502 master-0 kubenswrapper[26474]: I0223 13:31:38.041970 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0108db-c0d2-418a-a77f-e251a3c6eec4-logs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.042737 master-0 kubenswrapper[26474]: I0223 13:31:38.042444 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0108db-c0d2-418a-a77f-e251a3c6eec4-logs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.042737 master-0 kubenswrapper[26474]: I0223 13:31:38.042679 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.043205 master-0 kubenswrapper[26474]: I0223 13:31:38.043170 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.043271 master-0 kubenswrapper[26474]: I0223 13:31:38.043252 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmv66\" (UniqueName: \"kubernetes.io/projected/5b0108db-c0d2-418a-a77f-e251a3c6eec4-kube-api-access-mmv66\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.043364 master-0 kubenswrapper[26474]: I0223 13:31:38.043328 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-public-tls-certs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.043410 master-0 kubenswrapper[26474]: I0223 13:31:38.043377 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-config-data\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.046760 master-0 kubenswrapper[26474]: I0223 13:31:38.046566 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.046760 master-0 kubenswrapper[26474]: I0223 13:31:38.046671 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.047970 master-0 kubenswrapper[26474]: I0223 13:31:38.047936 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-config-data\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.048051 master-0 kubenswrapper[26474]: I0223 13:31:38.047974 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0108db-c0d2-418a-a77f-e251a3c6eec4-public-tls-certs\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.062465 master-0 kubenswrapper[26474]: I0223 13:31:38.062406 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmv66\" (UniqueName: \"kubernetes.io/projected/5b0108db-c0d2-418a-a77f-e251a3c6eec4-kube-api-access-mmv66\") pod \"nova-api-0\" (UID: \"5b0108db-c0d2-418a-a77f-e251a3c6eec4\") " pod="openstack/nova-api-0"
Feb 23 13:31:38.138052 master-0 kubenswrapper[26474]: I0223 13:31:38.137968 26474 util.go:30] "No sandbox for pod can
be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:31:38.418227 master-0 kubenswrapper[26474]: I0223 13:31:38.418137 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42877490-484d-4747-9834-c4ab4e701163" path="/var/lib/kubelet/pods/42877490-484d-4747-9834-c4ab4e701163/volumes" Feb 23 13:31:38.600953 master-0 kubenswrapper[26474]: W0223 13:31:38.600862 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b0108db_c0d2_418a_a77f_e251a3c6eec4.slice/crio-3ee5884a7be627b0303e0842a80d41dddca6ee071c9d36837f52e6eab31dadf7 WatchSource:0}: Error finding container 3ee5884a7be627b0303e0842a80d41dddca6ee071c9d36837f52e6eab31dadf7: Status 404 returned error can't find the container with id 3ee5884a7be627b0303e0842a80d41dddca6ee071c9d36837f52e6eab31dadf7 Feb 23 13:31:38.607677 master-0 kubenswrapper[26474]: I0223 13:31:38.607544 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:31:38.672061 master-0 kubenswrapper[26474]: I0223 13:31:38.671989 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5b0108db-c0d2-418a-a77f-e251a3c6eec4","Type":"ContainerStarted","Data":"3ee5884a7be627b0303e0842a80d41dddca6ee071c9d36837f52e6eab31dadf7"} Feb 23 13:31:39.116268 master-0 kubenswrapper[26474]: I0223 13:31:39.116195 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": read tcp 10.128.0.2:43360->10.128.1.7:8775: read: connection reset by peer" Feb 23 13:31:39.116531 master-0 kubenswrapper[26474]: I0223 13:31:39.116311 26474 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"https://10.128.1.7:8775/\": read tcp 10.128.0.2:43368->10.128.1.7:8775: read: connection reset by peer" Feb 23 13:31:39.685397 master-0 kubenswrapper[26474]: I0223 13:31:39.685299 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5b0108db-c0d2-418a-a77f-e251a3c6eec4","Type":"ContainerStarted","Data":"3b6bfa3c71aa95b64fc8bd542c1748edae6b4e526d678a3be3b5d8e9cbb2d7a5"} Feb 23 13:31:39.685397 master-0 kubenswrapper[26474]: I0223 13:31:39.685386 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5b0108db-c0d2-418a-a77f-e251a3c6eec4","Type":"ContainerStarted","Data":"19f45c8eab96aa092f2f714b1bcd0c9352c763fabdf150b187ca10b8b394af18"} Feb 23 13:31:39.688654 master-0 kubenswrapper[26474]: I0223 13:31:39.688604 26474 generic.go:334] "Generic (PLEG): container finished" podID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerID="cf5631018557db759c7b8824aa3f2c163dd11007f294dce6a6e0c75cc64e9391" exitCode=0 Feb 23 13:31:39.688798 master-0 kubenswrapper[26474]: I0223 13:31:39.688687 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d84999a2-9c53-4695-9e13-fa5d1f135b9f","Type":"ContainerDied","Data":"cf5631018557db759c7b8824aa3f2c163dd11007f294dce6a6e0c75cc64e9391"} Feb 23 13:31:39.688798 master-0 kubenswrapper[26474]: I0223 13:31:39.688747 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d84999a2-9c53-4695-9e13-fa5d1f135b9f","Type":"ContainerDied","Data":"58a2c29198caaccbf97e3282e02eab9b8314a0e09736da106c73595467d0ee18"} Feb 23 13:31:39.688798 master-0 kubenswrapper[26474]: I0223 13:31:39.688759 26474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58a2c29198caaccbf97e3282e02eab9b8314a0e09736da106c73595467d0ee18" Feb 23 13:31:39.699124 master-0 kubenswrapper[26474]: I0223 13:31:39.699029 26474 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:31:39.713112 master-0 kubenswrapper[26474]: I0223 13:31:39.712828 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.712807085 podStartE2EDuration="2.712807085s" podCreationTimestamp="2026-02-23 13:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:39.705649941 +0000 UTC m=+1021.552157638" watchObservedRunningTime="2026-02-23 13:31:39.712807085 +0000 UTC m=+1021.559314762" Feb 23 13:31:39.821940 master-0 kubenswrapper[26474]: I0223 13:31:39.818464 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsjbp\" (UniqueName: \"kubernetes.io/projected/d84999a2-9c53-4695-9e13-fa5d1f135b9f-kube-api-access-gsjbp\") pod \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " Feb 23 13:31:39.821940 master-0 kubenswrapper[26474]: I0223 13:31:39.818694 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-config-data\") pod \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " Feb 23 13:31:39.821940 master-0 kubenswrapper[26474]: I0223 13:31:39.818750 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-nova-metadata-tls-certs\") pod \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " Feb 23 13:31:39.821940 master-0 kubenswrapper[26474]: I0223 13:31:39.818861 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-combined-ca-bundle\") pod \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " Feb 23 13:31:39.821940 master-0 kubenswrapper[26474]: I0223 13:31:39.818918 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84999a2-9c53-4695-9e13-fa5d1f135b9f-logs\") pod \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\" (UID: \"d84999a2-9c53-4695-9e13-fa5d1f135b9f\") " Feb 23 13:31:39.837043 master-0 kubenswrapper[26474]: I0223 13:31:39.829445 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84999a2-9c53-4695-9e13-fa5d1f135b9f-logs" (OuterVolumeSpecName: "logs") pod "d84999a2-9c53-4695-9e13-fa5d1f135b9f" (UID: "d84999a2-9c53-4695-9e13-fa5d1f135b9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:31:39.839798 master-0 kubenswrapper[26474]: I0223 13:31:39.839734 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84999a2-9c53-4695-9e13-fa5d1f135b9f-kube-api-access-gsjbp" (OuterVolumeSpecName: "kube-api-access-gsjbp") pod "d84999a2-9c53-4695-9e13-fa5d1f135b9f" (UID: "d84999a2-9c53-4695-9e13-fa5d1f135b9f"). InnerVolumeSpecName "kube-api-access-gsjbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:31:39.865464 master-0 kubenswrapper[26474]: I0223 13:31:39.865354 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d84999a2-9c53-4695-9e13-fa5d1f135b9f" (UID: "d84999a2-9c53-4695-9e13-fa5d1f135b9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:39.898972 master-0 kubenswrapper[26474]: I0223 13:31:39.898898 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-config-data" (OuterVolumeSpecName: "config-data") pod "d84999a2-9c53-4695-9e13-fa5d1f135b9f" (UID: "d84999a2-9c53-4695-9e13-fa5d1f135b9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:39.901896 master-0 kubenswrapper[26474]: I0223 13:31:39.901838 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d84999a2-9c53-4695-9e13-fa5d1f135b9f" (UID: "d84999a2-9c53-4695-9e13-fa5d1f135b9f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:39.922970 master-0 kubenswrapper[26474]: I0223 13:31:39.922894 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsjbp\" (UniqueName: \"kubernetes.io/projected/d84999a2-9c53-4695-9e13-fa5d1f135b9f-kube-api-access-gsjbp\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:39.922970 master-0 kubenswrapper[26474]: I0223 13:31:39.922956 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:39.922970 master-0 kubenswrapper[26474]: I0223 13:31:39.922969 26474 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:39.922970 master-0 kubenswrapper[26474]: I0223 13:31:39.922979 26474 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84999a2-9c53-4695-9e13-fa5d1f135b9f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:39.922970 master-0 kubenswrapper[26474]: I0223 13:31:39.922987 26474 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84999a2-9c53-4695-9e13-fa5d1f135b9f-logs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:40.699372 master-0 kubenswrapper[26474]: I0223 13:31:40.699306 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:31:40.731231 master-0 kubenswrapper[26474]: I0223 13:31:40.731150 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:31:40.750982 master-0 kubenswrapper[26474]: I0223 13:31:40.750895 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:31:40.766366 master-0 kubenswrapper[26474]: I0223 13:31:40.763292 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:31:40.766366 master-0 kubenswrapper[26474]: E0223 13:31:40.763895 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-log" Feb 23 13:31:40.766366 master-0 kubenswrapper[26474]: I0223 13:31:40.763910 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-log" Feb 23 13:31:40.766366 master-0 kubenswrapper[26474]: E0223 13:31:40.763938 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-metadata" Feb 23 13:31:40.766366 master-0 kubenswrapper[26474]: I0223 13:31:40.763944 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-metadata" Feb 23 13:31:40.766366 
master-0 kubenswrapper[26474]: I0223 13:31:40.764199 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-log" Feb 23 13:31:40.766366 master-0 kubenswrapper[26474]: I0223 13:31:40.764292 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" containerName="nova-metadata-metadata" Feb 23 13:31:40.766366 master-0 kubenswrapper[26474]: I0223 13:31:40.765660 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:31:40.769051 master-0 kubenswrapper[26474]: I0223 13:31:40.768993 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 13:31:40.769259 master-0 kubenswrapper[26474]: I0223 13:31:40.769177 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 13:31:40.776531 master-0 kubenswrapper[26474]: I0223 13:31:40.776434 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:31:40.949057 master-0 kubenswrapper[26474]: I0223 13:31:40.948996 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4957a00c-e6d5-41c3-b294-65096a96c89d-logs\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:40.949275 master-0 kubenswrapper[26474]: I0223 13:31:40.949140 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:40.949275 master-0 kubenswrapper[26474]: I0223 
13:31:40.949164 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qr7p\" (UniqueName: \"kubernetes.io/projected/4957a00c-e6d5-41c3-b294-65096a96c89d-kube-api-access-4qr7p\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:40.949558 master-0 kubenswrapper[26474]: I0223 13:31:40.949462 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:40.949678 master-0 kubenswrapper[26474]: I0223 13:31:40.949649 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-config-data\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.052518 master-0 kubenswrapper[26474]: I0223 13:31:41.052449 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4957a00c-e6d5-41c3-b294-65096a96c89d-logs\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.053104 master-0 kubenswrapper[26474]: I0223 13:31:41.052925 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.053527 master-0 kubenswrapper[26474]: I0223 13:31:41.053008 26474 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4957a00c-e6d5-41c3-b294-65096a96c89d-logs\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.053643 master-0 kubenswrapper[26474]: I0223 13:31:41.053623 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr7p\" (UniqueName: \"kubernetes.io/projected/4957a00c-e6d5-41c3-b294-65096a96c89d-kube-api-access-4qr7p\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.053981 master-0 kubenswrapper[26474]: I0223 13:31:41.053961 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.054200 master-0 kubenswrapper[26474]: I0223 13:31:41.054185 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-config-data\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.058987 master-0 kubenswrapper[26474]: I0223 13:31:41.056714 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.058987 master-0 kubenswrapper[26474]: I0223 13:31:41.058652 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-config-data\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.065659 master-0 kubenswrapper[26474]: I0223 13:31:41.062652 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4957a00c-e6d5-41c3-b294-65096a96c89d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.070861 master-0 kubenswrapper[26474]: I0223 13:31:41.070814 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr7p\" (UniqueName: \"kubernetes.io/projected/4957a00c-e6d5-41c3-b294-65096a96c89d-kube-api-access-4qr7p\") pod \"nova-metadata-0\" (UID: \"4957a00c-e6d5-41c3-b294-65096a96c89d\") " pod="openstack/nova-metadata-0" Feb 23 13:31:41.122631 master-0 kubenswrapper[26474]: I0223 13:31:41.122527 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:31:41.514659 master-0 kubenswrapper[26474]: I0223 13:31:41.511576 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:31:41.673914 master-0 kubenswrapper[26474]: I0223 13:31:41.673823 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-combined-ca-bundle\") pod \"37c3e157-78c6-4d54-9bf5-9ed12e893297\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " Feb 23 13:31:41.673914 master-0 kubenswrapper[26474]: I0223 13:31:41.673896 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-config-data\") pod \"37c3e157-78c6-4d54-9bf5-9ed12e893297\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " Feb 23 13:31:41.674371 master-0 kubenswrapper[26474]: I0223 13:31:41.674121 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7mb5\" (UniqueName: \"kubernetes.io/projected/37c3e157-78c6-4d54-9bf5-9ed12e893297-kube-api-access-n7mb5\") pod \"37c3e157-78c6-4d54-9bf5-9ed12e893297\" (UID: \"37c3e157-78c6-4d54-9bf5-9ed12e893297\") " Feb 23 13:31:41.678266 master-0 kubenswrapper[26474]: I0223 13:31:41.678076 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c3e157-78c6-4d54-9bf5-9ed12e893297-kube-api-access-n7mb5" (OuterVolumeSpecName: "kube-api-access-n7mb5") pod "37c3e157-78c6-4d54-9bf5-9ed12e893297" (UID: "37c3e157-78c6-4d54-9bf5-9ed12e893297"). InnerVolumeSpecName "kube-api-access-n7mb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:31:41.711890 master-0 kubenswrapper[26474]: I0223 13:31:41.711776 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:31:41.716405 master-0 kubenswrapper[26474]: I0223 13:31:41.716142 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:31:41.716405 master-0 kubenswrapper[26474]: I0223 13:31:41.716168 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-config-data" (OuterVolumeSpecName: "config-data") pod "37c3e157-78c6-4d54-9bf5-9ed12e893297" (UID: "37c3e157-78c6-4d54-9bf5-9ed12e893297"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:41.716405 master-0 kubenswrapper[26474]: I0223 13:31:41.716191 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"37c3e157-78c6-4d54-9bf5-9ed12e893297","Type":"ContainerDied","Data":"a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9"} Feb 23 13:31:41.716405 master-0 kubenswrapper[26474]: I0223 13:31:41.716314 26474 scope.go:117] "RemoveContainer" containerID="a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9" Feb 23 13:31:41.716598 master-0 kubenswrapper[26474]: I0223 13:31:41.716068 26474 generic.go:334] "Generic (PLEG): container finished" podID="37c3e157-78c6-4d54-9bf5-9ed12e893297" containerID="a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9" exitCode=0 Feb 23 13:31:41.716730 master-0 kubenswrapper[26474]: I0223 13:31:41.716669 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"37c3e157-78c6-4d54-9bf5-9ed12e893297","Type":"ContainerDied","Data":"688c5a2145cf75bb4ff92eaeec2bd6cf5e21b03bfea0c45bf3569b2ad5c416e7"} Feb 23 13:31:41.719151 master-0 kubenswrapper[26474]: I0223 13:31:41.719103 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4957a00c-e6d5-41c3-b294-65096a96c89d","Type":"ContainerStarted","Data":"c14637ebdedf7f18b2555bc31e052e3056a16aecac0e78c5de66dbcb94196bf5"} Feb 23 13:31:41.728291 master-0 kubenswrapper[26474]: I0223 13:31:41.728243 26474 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37c3e157-78c6-4d54-9bf5-9ed12e893297" (UID: "37c3e157-78c6-4d54-9bf5-9ed12e893297"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:41.777567 master-0 kubenswrapper[26474]: I0223 13:31:41.777406 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7mb5\" (UniqueName: \"kubernetes.io/projected/37c3e157-78c6-4d54-9bf5-9ed12e893297-kube-api-access-n7mb5\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:41.777567 master-0 kubenswrapper[26474]: I0223 13:31:41.777473 26474 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:41.777567 master-0 kubenswrapper[26474]: I0223 13:31:41.777488 26474 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3e157-78c6-4d54-9bf5-9ed12e893297-config-data\") on node \"master-0\" DevicePath \"\"" Feb 23 13:31:41.824067 master-0 kubenswrapper[26474]: I0223 13:31:41.824011 26474 scope.go:117] "RemoveContainer" containerID="a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9" Feb 23 13:31:41.825761 master-0 kubenswrapper[26474]: E0223 13:31:41.825715 26474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9\": container with ID starting with a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9 not found: ID does not exist" containerID="a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9" Feb 23 13:31:41.825841 master-0 kubenswrapper[26474]: I0223 13:31:41.825773 
26474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9"} err="failed to get container status \"a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9\": rpc error: code = NotFound desc = could not find container \"a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9\": container with ID starting with a275016f27190257bb24ff3c6d2d16a6ae41f1f8091d2b0061c205848057e1b9 not found: ID does not exist" Feb 23 13:31:42.076379 master-0 kubenswrapper[26474]: I0223 13:31:42.072820 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:31:42.116425 master-0 kubenswrapper[26474]: I0223 13:31:42.114660 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:31:42.132731 master-0 kubenswrapper[26474]: I0223 13:31:42.131738 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:31:42.132731 master-0 kubenswrapper[26474]: E0223 13:31:42.132584 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c3e157-78c6-4d54-9bf5-9ed12e893297" containerName="nova-scheduler-scheduler" Feb 23 13:31:42.132731 master-0 kubenswrapper[26474]: I0223 13:31:42.132602 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c3e157-78c6-4d54-9bf5-9ed12e893297" containerName="nova-scheduler-scheduler" Feb 23 13:31:42.133091 master-0 kubenswrapper[26474]: I0223 13:31:42.133068 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c3e157-78c6-4d54-9bf5-9ed12e893297" containerName="nova-scheduler-scheduler" Feb 23 13:31:42.134180 master-0 kubenswrapper[26474]: I0223 13:31:42.134143 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:31:42.138098 master-0 kubenswrapper[26474]: I0223 13:31:42.138045 26474 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 13:31:42.158541 master-0 kubenswrapper[26474]: I0223 13:31:42.158474 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:31:42.299782 master-0 kubenswrapper[26474]: I0223 13:31:42.299611 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6z8\" (UniqueName: \"kubernetes.io/projected/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-kube-api-access-bz6z8\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 13:31:42.300000 master-0 kubenswrapper[26474]: I0223 13:31:42.299823 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 13:31:42.300199 master-0 kubenswrapper[26474]: I0223 13:31:42.300149 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-config-data\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 13:31:42.403382 master-0 kubenswrapper[26474]: I0223 13:31:42.403295 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-config-data\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 
13:31:42.403593 master-0 kubenswrapper[26474]: I0223 13:31:42.403511 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6z8\" (UniqueName: \"kubernetes.io/projected/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-kube-api-access-bz6z8\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 13:31:42.403674 master-0 kubenswrapper[26474]: I0223 13:31:42.403645 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 13:31:42.407929 master-0 kubenswrapper[26474]: I0223 13:31:42.407891 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 13:31:42.408851 master-0 kubenswrapper[26474]: I0223 13:31:42.408812 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-config-data\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 13:31:42.409172 master-0 kubenswrapper[26474]: I0223 13:31:42.409128 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c3e157-78c6-4d54-9bf5-9ed12e893297" path="/var/lib/kubelet/pods/37c3e157-78c6-4d54-9bf5-9ed12e893297/volumes" Feb 23 13:31:42.411661 master-0 kubenswrapper[26474]: I0223 13:31:42.409807 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84999a2-9c53-4695-9e13-fa5d1f135b9f" 
path="/var/lib/kubelet/pods/d84999a2-9c53-4695-9e13-fa5d1f135b9f/volumes" Feb 23 13:31:42.419951 master-0 kubenswrapper[26474]: I0223 13:31:42.419879 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz6z8\" (UniqueName: \"kubernetes.io/projected/56d49a97-e7b2-4dbe-a0b1-6aedcace0d27-kube-api-access-bz6z8\") pod \"nova-scheduler-0\" (UID: \"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27\") " pod="openstack/nova-scheduler-0" Feb 23 13:31:42.465331 master-0 kubenswrapper[26474]: I0223 13:31:42.465245 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:31:42.736173 master-0 kubenswrapper[26474]: I0223 13:31:42.736105 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4957a00c-e6d5-41c3-b294-65096a96c89d","Type":"ContainerStarted","Data":"e5e0166698d5c5089da77808598041e2fc6d33680608d8cbce209e3b3f62a790"} Feb 23 13:31:42.736721 master-0 kubenswrapper[26474]: I0223 13:31:42.736183 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4957a00c-e6d5-41c3-b294-65096a96c89d","Type":"ContainerStarted","Data":"c0801188d2a672d99d33add85abd0cce1972ffb84def171fefe0ab570d9dedd3"} Feb 23 13:31:42.763116 master-0 kubenswrapper[26474]: I0223 13:31:42.763030 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.763011052 podStartE2EDuration="2.763011052s" podCreationTimestamp="2026-02-23 13:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:42.758740477 +0000 UTC m=+1024.605248144" watchObservedRunningTime="2026-02-23 13:31:42.763011052 +0000 UTC m=+1024.609518729" Feb 23 13:31:42.977118 master-0 kubenswrapper[26474]: I0223 13:31:42.977040 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 23 13:31:42.981900 master-0 kubenswrapper[26474]: W0223 13:31:42.981838 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56d49a97_e7b2_4dbe_a0b1_6aedcace0d27.slice/crio-9b84471783f394235f3d544cd297a447d767c72a5f8af5cdbc6ef7f1cbba14aa WatchSource:0}: Error finding container 9b84471783f394235f3d544cd297a447d767c72a5f8af5cdbc6ef7f1cbba14aa: Status 404 returned error can't find the container with id 9b84471783f394235f3d544cd297a447d767c72a5f8af5cdbc6ef7f1cbba14aa Feb 23 13:31:43.753163 master-0 kubenswrapper[26474]: I0223 13:31:43.753024 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27","Type":"ContainerStarted","Data":"8b0c1f7a7d8eaeb45b404a11dce6c46748f091e48b87b4bbd327ac7788144d42"} Feb 23 13:31:43.753163 master-0 kubenswrapper[26474]: I0223 13:31:43.753133 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"56d49a97-e7b2-4dbe-a0b1-6aedcace0d27","Type":"ContainerStarted","Data":"9b84471783f394235f3d544cd297a447d767c72a5f8af5cdbc6ef7f1cbba14aa"} Feb 23 13:31:43.783903 master-0 kubenswrapper[26474]: I0223 13:31:43.783788 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.783769018 podStartE2EDuration="1.783769018s" podCreationTimestamp="2026-02-23 13:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:43.776941812 +0000 UTC m=+1025.623449499" watchObservedRunningTime="2026-02-23 13:31:43.783769018 +0000 UTC m=+1025.630276695" Feb 23 13:31:46.123083 master-0 kubenswrapper[26474]: I0223 13:31:46.123000 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:31:46.123083 
master-0 kubenswrapper[26474]: I0223 13:31:46.123076 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:31:47.466410 master-0 kubenswrapper[26474]: I0223 13:31:47.466334 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 13:31:48.138808 master-0 kubenswrapper[26474]: I0223 13:31:48.138739 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 13:31:48.138808 master-0 kubenswrapper[26474]: I0223 13:31:48.138809 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 13:31:49.164368 master-0 kubenswrapper[26474]: I0223 13:31:49.160645 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5b0108db-c0d2-418a-a77f-e251a3c6eec4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.14:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:31:49.164368 master-0 kubenswrapper[26474]: I0223 13:31:49.160981 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5b0108db-c0d2-418a-a77f-e251a3c6eec4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.14:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:31:51.123691 master-0 kubenswrapper[26474]: I0223 13:31:51.123500 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 13:31:51.123691 master-0 kubenswrapper[26474]: I0223 13:31:51.123605 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 13:31:52.138525 master-0 kubenswrapper[26474]: I0223 13:31:52.138448 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="4957a00c-e6d5-41c3-b294-65096a96c89d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.15:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:31:52.139115 master-0 kubenswrapper[26474]: I0223 13:31:52.138547 26474 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4957a00c-e6d5-41c3-b294-65096a96c89d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.15:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:31:52.465595 master-0 kubenswrapper[26474]: I0223 13:31:52.465521 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 13:31:52.497180 master-0 kubenswrapper[26474]: I0223 13:31:52.497126 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 13:31:52.905487 master-0 kubenswrapper[26474]: I0223 13:31:52.905435 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 13:31:58.146173 master-0 kubenswrapper[26474]: I0223 13:31:58.145096 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 13:31:58.146173 master-0 kubenswrapper[26474]: I0223 13:31:58.145200 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 13:31:58.146173 master-0 kubenswrapper[26474]: I0223 13:31:58.145796 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 13:31:58.146173 master-0 kubenswrapper[26474]: I0223 13:31:58.145854 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 13:31:58.152487 master-0 kubenswrapper[26474]: I0223 13:31:58.152423 26474 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 13:31:58.152618 master-0 kubenswrapper[26474]: I0223 13:31:58.152502 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 13:32:01.128889 master-0 kubenswrapper[26474]: I0223 13:32:01.128808 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 13:32:01.129554 master-0 kubenswrapper[26474]: I0223 13:32:01.128923 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 13:32:01.133047 master-0 kubenswrapper[26474]: I0223 13:32:01.132916 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 13:32:01.135880 master-0 kubenswrapper[26474]: I0223 13:32:01.135838 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 13:32:27.977452 master-0 kubenswrapper[26474]: I0223 13:32:27.977379 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"] Feb 23 13:32:27.978101 master-0 kubenswrapper[26474]: I0223 13:32:27.977597 26474 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn" podUID="16b626ff-7dab-4ad6-9ad8-d639af21bedc" containerName="sushy-emulator" containerID="cri-o://1199bc619f5c722ce342004dcb78f2414d734b39dce3ca2ec709699b56716ff2" gracePeriod=30 Feb 23 13:32:28.292878 master-0 kubenswrapper[26474]: I0223 13:32:28.292809 26474 generic.go:334] "Generic (PLEG): container finished" podID="16b626ff-7dab-4ad6-9ad8-d639af21bedc" containerID="1199bc619f5c722ce342004dcb78f2414d734b39dce3ca2ec709699b56716ff2" exitCode=0 Feb 23 13:32:28.293137 master-0 kubenswrapper[26474]: I0223 13:32:28.292875 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn" 
event={"ID":"16b626ff-7dab-4ad6-9ad8-d639af21bedc","Type":"ContainerDied","Data":"1199bc619f5c722ce342004dcb78f2414d734b39dce3ca2ec709699b56716ff2"} Feb 23 13:32:28.626462 master-0 kubenswrapper[26474]: I0223 13:32:28.626390 26474 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn" Feb 23 13:32:28.723559 master-0 kubenswrapper[26474]: I0223 13:32:28.723465 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/16b626ff-7dab-4ad6-9ad8-d639af21bedc-sushy-emulator-config\") pod \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " Feb 23 13:32:28.723986 master-0 kubenswrapper[26474]: I0223 13:32:28.723795 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsrbf\" (UniqueName: \"kubernetes.io/projected/16b626ff-7dab-4ad6-9ad8-d639af21bedc-kube-api-access-jsrbf\") pod \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " Feb 23 13:32:28.723986 master-0 kubenswrapper[26474]: I0223 13:32:28.723947 26474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/16b626ff-7dab-4ad6-9ad8-d639af21bedc-os-client-config\") pod \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\" (UID: \"16b626ff-7dab-4ad6-9ad8-d639af21bedc\") " Feb 23 13:32:28.736809 master-0 kubenswrapper[26474]: I0223 13:32:28.724841 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16b626ff-7dab-4ad6-9ad8-d639af21bedc-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "16b626ff-7dab-4ad6-9ad8-d639af21bedc" (UID: "16b626ff-7dab-4ad6-9ad8-d639af21bedc"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:32:28.736809 master-0 kubenswrapper[26474]: I0223 13:32:28.734466 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b626ff-7dab-4ad6-9ad8-d639af21bedc-kube-api-access-jsrbf" (OuterVolumeSpecName: "kube-api-access-jsrbf") pod "16b626ff-7dab-4ad6-9ad8-d639af21bedc" (UID: "16b626ff-7dab-4ad6-9ad8-d639af21bedc"). InnerVolumeSpecName "kube-api-access-jsrbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:32:28.742615 master-0 kubenswrapper[26474]: I0223 13:32:28.742412 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-8jchk"] Feb 23 13:32:28.754475 master-0 kubenswrapper[26474]: E0223 13:32:28.742977 26474 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b626ff-7dab-4ad6-9ad8-d639af21bedc" containerName="sushy-emulator" Feb 23 13:32:28.754475 master-0 kubenswrapper[26474]: I0223 13:32:28.742998 26474 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b626ff-7dab-4ad6-9ad8-d639af21bedc" containerName="sushy-emulator" Feb 23 13:32:28.754475 master-0 kubenswrapper[26474]: I0223 13:32:28.743264 26474 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b626ff-7dab-4ad6-9ad8-d639af21bedc" containerName="sushy-emulator" Feb 23 13:32:28.754475 master-0 kubenswrapper[26474]: I0223 13:32:28.744178 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.754475 master-0 kubenswrapper[26474]: I0223 13:32:28.748942 26474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b626ff-7dab-4ad6-9ad8-d639af21bedc-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "16b626ff-7dab-4ad6-9ad8-d639af21bedc" (UID: "16b626ff-7dab-4ad6-9ad8-d639af21bedc"). InnerVolumeSpecName "os-client-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:32:28.757554 master-0 kubenswrapper[26474]: I0223 13:32:28.757496 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-8jchk"] Feb 23 13:32:28.829152 master-0 kubenswrapper[26474]: I0223 13:32:28.828871 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/71c34779-a19a-4513-b73e-b2d771cc0091-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.829430 master-0 kubenswrapper[26474]: I0223 13:32:28.829196 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/71c34779-a19a-4513-b73e-b2d771cc0091-os-client-config\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.829878 master-0 kubenswrapper[26474]: I0223 13:32:28.829837 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg6r7\" (UniqueName: \"kubernetes.io/projected/71c34779-a19a-4513-b73e-b2d771cc0091-kube-api-access-kg6r7\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.831010 master-0 kubenswrapper[26474]: I0223 13:32:28.830311 26474 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/16b626ff-7dab-4ad6-9ad8-d639af21bedc-os-client-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:32:28.831010 master-0 kubenswrapper[26474]: I0223 13:32:28.830376 26474 reconciler_common.go:293] "Volume detached for volume 
\"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/16b626ff-7dab-4ad6-9ad8-d639af21bedc-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:32:28.831010 master-0 kubenswrapper[26474]: I0223 13:32:28.830396 26474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsrbf\" (UniqueName: \"kubernetes.io/projected/16b626ff-7dab-4ad6-9ad8-d639af21bedc-kube-api-access-jsrbf\") on node \"master-0\" DevicePath \"\"" Feb 23 13:32:28.933226 master-0 kubenswrapper[26474]: I0223 13:32:28.933159 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg6r7\" (UniqueName: \"kubernetes.io/projected/71c34779-a19a-4513-b73e-b2d771cc0091-kube-api-access-kg6r7\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.933449 master-0 kubenswrapper[26474]: I0223 13:32:28.933315 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/71c34779-a19a-4513-b73e-b2d771cc0091-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.933449 master-0 kubenswrapper[26474]: I0223 13:32:28.933392 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/71c34779-a19a-4513-b73e-b2d771cc0091-os-client-config\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.938209 master-0 kubenswrapper[26474]: I0223 13:32:28.936813 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: 
\"kubernetes.io/configmap/71c34779-a19a-4513-b73e-b2d771cc0091-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.938209 master-0 kubenswrapper[26474]: I0223 13:32:28.938116 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/71c34779-a19a-4513-b73e-b2d771cc0091-os-client-config\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:28.951686 master-0 kubenswrapper[26474]: I0223 13:32:28.949831 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg6r7\" (UniqueName: \"kubernetes.io/projected/71c34779-a19a-4513-b73e-b2d771cc0091-kube-api-access-kg6r7\") pod \"sushy-emulator-84965d5d88-8jchk\" (UID: \"71c34779-a19a-4513-b73e-b2d771cc0091\") " pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:29.147440 master-0 kubenswrapper[26474]: I0223 13:32:29.147195 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:29.310971 master-0 kubenswrapper[26474]: I0223 13:32:29.310874 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn" event={"ID":"16b626ff-7dab-4ad6-9ad8-d639af21bedc","Type":"ContainerDied","Data":"cc54ed9d1faac2d7fcd22c7c5b6df33729422c37f5b5268f14d156941cce777d"} Feb 23 13:32:29.310971 master-0 kubenswrapper[26474]: I0223 13:32:29.310947 26474 scope.go:117] "RemoveContainer" containerID="1199bc619f5c722ce342004dcb78f2414d734b39dce3ca2ec709699b56716ff2" Feb 23 13:32:29.311559 master-0 kubenswrapper[26474]: I0223 13:32:29.311090 26474 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-wtnqn" Feb 23 13:32:29.415090 master-0 kubenswrapper[26474]: I0223 13:32:29.414854 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"] Feb 23 13:32:29.429578 master-0 kubenswrapper[26474]: I0223 13:32:29.429481 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-wtnqn"] Feb 23 13:32:29.737742 master-0 kubenswrapper[26474]: W0223 13:32:29.737668 26474 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71c34779_a19a_4513_b73e_b2d771cc0091.slice/crio-8646dff1d98ff245fa2f2bc722bb841b872a53555ac591af4c40b194df1b25aa WatchSource:0}: Error finding container 8646dff1d98ff245fa2f2bc722bb841b872a53555ac591af4c40b194df1b25aa: Status 404 returned error can't find the container with id 8646dff1d98ff245fa2f2bc722bb841b872a53555ac591af4c40b194df1b25aa Feb 23 13:32:29.742000 master-0 kubenswrapper[26474]: I0223 13:32:29.741952 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-8jchk"] Feb 23 13:32:30.327745 master-0 kubenswrapper[26474]: I0223 13:32:30.327689 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" event={"ID":"71c34779-a19a-4513-b73e-b2d771cc0091","Type":"ContainerStarted","Data":"bd98ccf750890c3c3253116936c1ff510324089f11caca3cb5e0bfd40b46c0d4"} Feb 23 13:32:30.327745 master-0 kubenswrapper[26474]: I0223 13:32:30.327746 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" event={"ID":"71c34779-a19a-4513-b73e-b2d771cc0091","Type":"ContainerStarted","Data":"8646dff1d98ff245fa2f2bc722bb841b872a53555ac591af4c40b194df1b25aa"} Feb 23 13:32:30.370512 master-0 kubenswrapper[26474]: I0223 13:32:30.368691 26474 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" podStartSLOduration=2.368663282 podStartE2EDuration="2.368663282s" podCreationTimestamp="2026-02-23 13:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:32:30.350092988 +0000 UTC m=+1072.196600665" watchObservedRunningTime="2026-02-23 13:32:30.368663282 +0000 UTC m=+1072.215170959" Feb 23 13:32:30.412834 master-0 kubenswrapper[26474]: I0223 13:32:30.412775 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16b626ff-7dab-4ad6-9ad8-d639af21bedc" path="/var/lib/kubelet/pods/16b626ff-7dab-4ad6-9ad8-d639af21bedc/volumes" Feb 23 13:32:39.148164 master-0 kubenswrapper[26474]: I0223 13:32:39.148094 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:39.149493 master-0 kubenswrapper[26474]: I0223 13:32:39.149459 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:39.158751 master-0 kubenswrapper[26474]: I0223 13:32:39.158640 26474 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:32:39.492570 master-0 kubenswrapper[26474]: I0223 13:32:39.492387 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-84965d5d88-8jchk" Feb 23 13:33:51.392192 master-0 kubenswrapper[26474]: I0223 13:33:51.392098 26474 scope.go:117] "RemoveContainer" containerID="b48c02425a759e3b577ebfca42c7552b279e1fa35569f1c8173cec14f819bc4a" Feb 23 13:33:51.442995 master-0 kubenswrapper[26474]: I0223 13:33:51.442889 26474 scope.go:117] "RemoveContainer" containerID="7e1b1b499cb3fd72bab3d32e99cae3503c24c12b065d2a5acb74cfdad1bbc1c4" Feb 23 13:34:51.690150 master-0 kubenswrapper[26474]: I0223 13:34:51.689986 
26474 scope.go:117] "RemoveContainer" containerID="77e045467cf762d3952cffb764755f74a6a6488fa78d84b6a9f31be34a7c7ba8" Feb 23 13:34:51.723848 master-0 kubenswrapper[26474]: I0223 13:34:51.723781 26474 scope.go:117] "RemoveContainer" containerID="ccabe7e44c803917cdf2273fbfb9b5d2534d44f374144b586f0bb64313d108e9" Feb 23 13:34:51.803942 master-0 kubenswrapper[26474]: I0223 13:34:51.803874 26474 scope.go:117] "RemoveContainer" containerID="d3cf12cf18cf168968de06a70ee50280cba60cef96bfb312ac9f806338aaad18" Feb 23 13:36:51.973483 master-0 kubenswrapper[26474]: I0223 13:36:51.973413 26474 scope.go:117] "RemoveContainer" containerID="42a9d2820134e9ed8475389d1474adb5fd65e3b68b4b9165741bd1c2dad925dc" Feb 23 13:36:51.998447 master-0 kubenswrapper[26474]: I0223 13:36:51.998383 26474 scope.go:117] "RemoveContainer" containerID="4ea7d16cbc076f912a88373b7e88635e6072ea1c9fcebc2ed0c0504c339455c7" Feb 23 13:37:40.781575 master-0 kubenswrapper[26474]: I0223 13:37:40.781476 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6l6d2/must-gather-v9td5"] Feb 23 13:37:40.784098 master-0 kubenswrapper[26474]: I0223 13:37:40.784045 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6l6d2/must-gather-v9td5" Feb 23 13:37:40.786087 master-0 kubenswrapper[26474]: I0223 13:37:40.786038 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6l6d2"/"openshift-service-ca.crt" Feb 23 13:37:40.786538 master-0 kubenswrapper[26474]: I0223 13:37:40.786491 26474 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6l6d2"/"kube-root-ca.crt" Feb 23 13:37:40.800149 master-0 kubenswrapper[26474]: I0223 13:37:40.800073 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6l6d2/must-gather-k2kcl"] Feb 23 13:37:40.802258 master-0 kubenswrapper[26474]: I0223 13:37:40.802218 26474 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6l6d2/must-gather-k2kcl" Feb 23 13:37:40.819440 master-0 kubenswrapper[26474]: I0223 13:37:40.819306 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6l6d2/must-gather-v9td5"] Feb 23 13:37:40.845369 master-0 kubenswrapper[26474]: I0223 13:37:40.835585 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6l6d2/must-gather-k2kcl"] Feb 23 13:37:40.852559 master-0 kubenswrapper[26474]: I0223 13:37:40.851653 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxx4\" (UniqueName: \"kubernetes.io/projected/83bc4727-92c1-4379-854c-779000ec779a-kube-api-access-mxxx4\") pod \"must-gather-k2kcl\" (UID: \"83bc4727-92c1-4379-854c-779000ec779a\") " pod="openshift-must-gather-6l6d2/must-gather-k2kcl" Feb 23 13:37:40.852559 master-0 kubenswrapper[26474]: I0223 13:37:40.851763 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcb5f4c0-2acd-4936-b8e6-9709474d2f3d-must-gather-output\") pod \"must-gather-v9td5\" (UID: \"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d\") " pod="openshift-must-gather-6l6d2/must-gather-v9td5" Feb 23 13:37:40.852559 master-0 kubenswrapper[26474]: I0223 13:37:40.851944 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8dqt\" (UniqueName: \"kubernetes.io/projected/bcb5f4c0-2acd-4936-b8e6-9709474d2f3d-kube-api-access-q8dqt\") pod \"must-gather-v9td5\" (UID: \"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d\") " pod="openshift-must-gather-6l6d2/must-gather-v9td5" Feb 23 13:37:40.852559 master-0 kubenswrapper[26474]: I0223 13:37:40.852004 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/83bc4727-92c1-4379-854c-779000ec779a-must-gather-output\") pod \"must-gather-k2kcl\" (UID: \"83bc4727-92c1-4379-854c-779000ec779a\") " pod="openshift-must-gather-6l6d2/must-gather-k2kcl" Feb 23 13:37:40.960364 master-0 kubenswrapper[26474]: I0223 13:37:40.953809 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcb5f4c0-2acd-4936-b8e6-9709474d2f3d-must-gather-output\") pod \"must-gather-v9td5\" (UID: \"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d\") " pod="openshift-must-gather-6l6d2/must-gather-v9td5" Feb 23 13:37:40.960364 master-0 kubenswrapper[26474]: I0223 13:37:40.954011 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8dqt\" (UniqueName: \"kubernetes.io/projected/bcb5f4c0-2acd-4936-b8e6-9709474d2f3d-kube-api-access-q8dqt\") pod \"must-gather-v9td5\" (UID: \"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d\") " pod="openshift-must-gather-6l6d2/must-gather-v9td5" Feb 23 13:37:40.960364 master-0 kubenswrapper[26474]: I0223 13:37:40.954071 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/83bc4727-92c1-4379-854c-779000ec779a-must-gather-output\") pod \"must-gather-k2kcl\" (UID: \"83bc4727-92c1-4379-854c-779000ec779a\") " pod="openshift-must-gather-6l6d2/must-gather-k2kcl" Feb 23 13:37:40.960364 master-0 kubenswrapper[26474]: I0223 13:37:40.954170 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxx4\" (UniqueName: \"kubernetes.io/projected/83bc4727-92c1-4379-854c-779000ec779a-kube-api-access-mxxx4\") pod \"must-gather-k2kcl\" (UID: \"83bc4727-92c1-4379-854c-779000ec779a\") " pod="openshift-must-gather-6l6d2/must-gather-k2kcl" Feb 23 13:37:40.960364 master-0 kubenswrapper[26474]: I0223 13:37:40.954500 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcb5f4c0-2acd-4936-b8e6-9709474d2f3d-must-gather-output\") pod \"must-gather-v9td5\" (UID: \"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d\") " pod="openshift-must-gather-6l6d2/must-gather-v9td5"
Feb 23 13:37:40.960364 master-0 kubenswrapper[26474]: I0223 13:37:40.956028 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/83bc4727-92c1-4379-854c-779000ec779a-must-gather-output\") pod \"must-gather-k2kcl\" (UID: \"83bc4727-92c1-4379-854c-779000ec779a\") " pod="openshift-must-gather-6l6d2/must-gather-k2kcl"
Feb 23 13:37:40.978448 master-0 kubenswrapper[26474]: I0223 13:37:40.971884 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxx4\" (UniqueName: \"kubernetes.io/projected/83bc4727-92c1-4379-854c-779000ec779a-kube-api-access-mxxx4\") pod \"must-gather-k2kcl\" (UID: \"83bc4727-92c1-4379-854c-779000ec779a\") " pod="openshift-must-gather-6l6d2/must-gather-k2kcl"
Feb 23 13:37:40.978448 master-0 kubenswrapper[26474]: I0223 13:37:40.973035 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8dqt\" (UniqueName: \"kubernetes.io/projected/bcb5f4c0-2acd-4936-b8e6-9709474d2f3d-kube-api-access-q8dqt\") pod \"must-gather-v9td5\" (UID: \"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d\") " pod="openshift-must-gather-6l6d2/must-gather-v9td5"
Feb 23 13:37:41.116569 master-0 kubenswrapper[26474]: I0223 13:37:41.116407 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6l6d2/must-gather-v9td5"
Feb 23 13:37:41.132158 master-0 kubenswrapper[26474]: I0223 13:37:41.132100 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6l6d2/must-gather-k2kcl"
Feb 23 13:37:41.579227 master-0 kubenswrapper[26474]: I0223 13:37:41.579169 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6l6d2/must-gather-v9td5"]
Feb 23 13:37:41.580660 master-0 kubenswrapper[26474]: I0223 13:37:41.580626 26474 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:37:41.673111 master-0 kubenswrapper[26474]: I0223 13:37:41.667940 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6l6d2/must-gather-k2kcl"]
Feb 23 13:37:42.544661 master-0 kubenswrapper[26474]: I0223 13:37:42.544495 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/must-gather-k2kcl" event={"ID":"83bc4727-92c1-4379-854c-779000ec779a","Type":"ContainerStarted","Data":"d5d54cec3f763ec2ab9250910ba968d92778c3804da042ff4f7ba7aaf2f6b1cf"}
Feb 23 13:37:42.548644 master-0 kubenswrapper[26474]: I0223 13:37:42.548563 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/must-gather-v9td5" event={"ID":"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d","Type":"ContainerStarted","Data":"dc3faa8adf5222c1bde0ccfaf0f817d04b88b1c00ef5a026134b26cf31c802fa"}
Feb 23 13:37:43.565575 master-0 kubenswrapper[26474]: I0223 13:37:43.565493 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/must-gather-k2kcl" event={"ID":"83bc4727-92c1-4379-854c-779000ec779a","Type":"ContainerStarted","Data":"b3023365a2a20f11f7d08b25a339a73456a05b6fd0971ae0980e5c2dab9bb166"}
Feb 23 13:37:43.565575 master-0 kubenswrapper[26474]: I0223 13:37:43.565567 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/must-gather-k2kcl" event={"ID":"83bc4727-92c1-4379-854c-779000ec779a","Type":"ContainerStarted","Data":"8b2a0bc3bd3a603b85ee2c73e2e7949cc6af058aed80102818df3832e19ddaee"}
Feb 23 13:37:43.599242 master-0 kubenswrapper[26474]: I0223 13:37:43.599061 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6l6d2/must-gather-k2kcl" podStartSLOduration=2.449827366 podStartE2EDuration="3.599041153s" podCreationTimestamp="2026-02-23 13:37:40 +0000 UTC" firstStartedPulling="2026-02-23 13:37:41.676971481 +0000 UTC m=+1383.523479168" lastFinishedPulling="2026-02-23 13:37:42.826185278 +0000 UTC m=+1384.672692955" observedRunningTime="2026-02-23 13:37:43.585971964 +0000 UTC m=+1385.432479641" watchObservedRunningTime="2026-02-23 13:37:43.599041153 +0000 UTC m=+1385.445548830"
Feb 23 13:37:44.456395 master-0 kubenswrapper[26474]: I0223 13:37:44.456336 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-8jbxf_24d878bd-05cd-414e-94c1-a3e9ce637331/cluster-version-operator/0.log"
Feb 23 13:37:45.092807 master-0 kubenswrapper[26474]: I0223 13:37:45.092693 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nbjdj"]
Feb 23 13:37:45.104378 master-0 kubenswrapper[26474]: I0223 13:37:45.104312 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-6wgzv"]
Feb 23 13:37:45.121308 master-0 kubenswrapper[26474]: I0223 13:37:45.121035 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1b01-account-create-update-57vw2"]
Feb 23 13:37:45.138327 master-0 kubenswrapper[26474]: I0223 13:37:45.136793 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2a9d-account-create-update-2kcdv"]
Feb 23 13:37:45.150628 master-0 kubenswrapper[26474]: I0223 13:37:45.150572 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nbjdj"]
Feb 23 13:37:45.164627 master-0 kubenswrapper[26474]: I0223 13:37:45.164581 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-6wgzv"]
Feb 23 13:37:45.179591 master-0 kubenswrapper[26474]: I0223 13:37:45.179540 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1b01-account-create-update-57vw2"]
Feb 23 13:37:45.196019 master-0 kubenswrapper[26474]: I0223 13:37:45.195960 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2a9d-account-create-update-2kcdv"]
Feb 23 13:37:45.686281 master-0 kubenswrapper[26474]: I0223 13:37:45.686212 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-8jbxf_24d878bd-05cd-414e-94c1-a3e9ce637331/cluster-version-operator/1.log"
Feb 23 13:37:46.069008 master-0 kubenswrapper[26474]: I0223 13:37:46.068939 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9kw5h"]
Feb 23 13:37:46.087268 master-0 kubenswrapper[26474]: I0223 13:37:46.086966 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ea4f-account-create-update-rdkts"]
Feb 23 13:37:46.103842 master-0 kubenswrapper[26474]: I0223 13:37:46.103780 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9kw5h"]
Feb 23 13:37:46.117031 master-0 kubenswrapper[26474]: I0223 13:37:46.116964 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ea4f-account-create-update-rdkts"]
Feb 23 13:37:46.408297 master-0 kubenswrapper[26474]: I0223 13:37:46.408110 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eea86bf-8169-406d-bd43-b83c6d53f41f" path="/var/lib/kubelet/pods/0eea86bf-8169-406d-bd43-b83c6d53f41f/volumes"
Feb 23 13:37:46.409275 master-0 kubenswrapper[26474]: I0223 13:37:46.409259 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e097639-7f3b-414c-bbdc-a41202715f31" path="/var/lib/kubelet/pods/1e097639-7f3b-414c-bbdc-a41202715f31/volumes"
Feb 23 13:37:46.410761 master-0 kubenswrapper[26474]: I0223 13:37:46.410743 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a010b6-e0ac-42be-aa66-80acd726f647" path="/var/lib/kubelet/pods/53a010b6-e0ac-42be-aa66-80acd726f647/volumes"
Feb 23 13:37:46.412747 master-0 kubenswrapper[26474]: I0223 13:37:46.412729 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603694e6-de5e-4098-8271-af0f6ae0f5a5" path="/var/lib/kubelet/pods/603694e6-de5e-4098-8271-af0f6ae0f5a5/volumes"
Feb 23 13:37:46.413894 master-0 kubenswrapper[26474]: I0223 13:37:46.413877 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec78070-f90d-43c3-b5dc-2b05b570d739" path="/var/lib/kubelet/pods/bec78070-f90d-43c3-b5dc-2b05b570d739/volumes"
Feb 23 13:37:46.414540 master-0 kubenswrapper[26474]: I0223 13:37:46.414524 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e049806d-aa21-4102-8583-a142a2f80c58" path="/var/lib/kubelet/pods/e049806d-aa21-4102-8583-a142a2f80c58/volumes"
Feb 23 13:37:47.983084 master-0 kubenswrapper[26474]: I0223 13:37:47.983045 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-4j4d8_1ab1d91b-91b5-47ce-aefb-b42c7e651fdd/nmstate-console-plugin/0.log"
Feb 23 13:37:48.137592 master-0 kubenswrapper[26474]: I0223 13:37:48.137554 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lsm7x_09806916-edb0-4917-a1c1-73690ff280cd/nmstate-handler/0.log"
Feb 23 13:37:48.154236 master-0 kubenswrapper[26474]: I0223 13:37:48.154173 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4nr9t_b242a27c-8dd3-43ee-af46-4dd04ffd1cce/nmstate-metrics/0.log"
Feb 23 13:37:48.163767 master-0 kubenswrapper[26474]: I0223 13:37:48.163695 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4nr9t_b242a27c-8dd3-43ee-af46-4dd04ffd1cce/kube-rbac-proxy/0.log"
Feb 23 13:37:48.188451 master-0 kubenswrapper[26474]: I0223 13:37:48.184436 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-bpgsl_cd299e55-d6cc-4482-937f-5c4c9248b7d6/nmstate-operator/0.log"
Feb 23 13:37:48.213732 master-0 kubenswrapper[26474]: I0223 13:37:48.213678 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-qf29q_897817d0-59c1-45ca-afe1-9d525ad217d4/nmstate-webhook/0.log"
Feb 23 13:37:48.791639 master-0 kubenswrapper[26474]: I0223 13:37:48.789021 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6bhkv_c274d379-166c-4181-9e3b-8a27e4bfcc8e/controller/0.log"
Feb 23 13:37:48.801821 master-0 kubenswrapper[26474]: I0223 13:37:48.801757 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6bhkv_c274d379-166c-4181-9e3b-8a27e4bfcc8e/kube-rbac-proxy/0.log"
Feb 23 13:37:48.890145 master-0 kubenswrapper[26474]: I0223 13:37:48.890104 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/controller/0.log"
Feb 23 13:37:50.072450 master-0 kubenswrapper[26474]: I0223 13:37:50.072374 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/frr/0.log"
Feb 23 13:37:50.081285 master-0 kubenswrapper[26474]: I0223 13:37:50.081230 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/reloader/0.log"
Feb 23 13:37:50.090523 master-0 kubenswrapper[26474]: I0223 13:37:50.090486 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/frr-metrics/0.log"
Feb 23 13:37:50.101727 master-0 kubenswrapper[26474]: I0223 13:37:50.101658 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/kube-rbac-proxy/0.log"
Feb 23 13:37:50.110407 master-0 kubenswrapper[26474]: I0223 13:37:50.110377 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/kube-rbac-proxy-frr/0.log"
Feb 23 13:37:50.121061 master-0 kubenswrapper[26474]: I0223 13:37:50.121020 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-frr-files/0.log"
Feb 23 13:37:50.128154 master-0 kubenswrapper[26474]: I0223 13:37:50.128131 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-reloader/0.log"
Feb 23 13:37:50.141254 master-0 kubenswrapper[26474]: I0223 13:37:50.141176 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-metrics/0.log"
Feb 23 13:37:50.156388 master-0 kubenswrapper[26474]: I0223 13:37:50.153565 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-rmg9d_7d38b74b-692d-400d-9cb0-bdfe09afc08f/frr-k8s-webhook-server/0.log"
Feb 23 13:37:50.184234 master-0 kubenswrapper[26474]: I0223 13:37:50.184132 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-699c7b98cc-68v7n_b482fa9f-094a-41d1-8259-fc9d625f0b65/manager/0.log"
Feb 23 13:37:50.211106 master-0 kubenswrapper[26474]: I0223 13:37:50.210174 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8bb78c4ff-kc79t_7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da/webhook-server/0.log"
Feb 23 13:37:50.550972 master-0 kubenswrapper[26474]: I0223 13:37:50.550889 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tms9_cdefb8f6-8066-4ce4-b76c-5b19831081c9/speaker/0.log"
Feb 23 13:37:50.571810 master-0 kubenswrapper[26474]: I0223 13:37:50.558151 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tms9_cdefb8f6-8066-4ce4-b76c-5b19831081c9/kube-rbac-proxy/0.log"
Feb 23 13:37:50.846292 master-0 kubenswrapper[26474]: I0223 13:37:50.846257 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcdctl/0.log"
Feb 23 13:37:51.118856 master-0 kubenswrapper[26474]: I0223 13:37:51.118742 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd/0.log"
Feb 23 13:37:51.138391 master-0 kubenswrapper[26474]: I0223 13:37:51.138321 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-metrics/0.log"
Feb 23 13:37:51.158163 master-0 kubenswrapper[26474]: I0223 13:37:51.158106 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-readyz/0.log"
Feb 23 13:37:51.172536 master-0 kubenswrapper[26474]: I0223 13:37:51.172480 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-rev/0.log"
Feb 23 13:37:51.194677 master-0 kubenswrapper[26474]: I0223 13:37:51.193658 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/setup/0.log"
Feb 23 13:37:51.215850 master-0 kubenswrapper[26474]: I0223 13:37:51.215802 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-ensure-env-vars/0.log"
Feb 23 13:37:51.235368 master-0 kubenswrapper[26474]: I0223 13:37:51.233646 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-resources-copy/0.log"
Feb 23 13:37:51.274365 master-0 kubenswrapper[26474]: I0223 13:37:51.274024 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_29d3a080-c8a3-4359-9442-972bf4bb9b04/installer/0.log"
Feb 23 13:37:51.318368 master-0 kubenswrapper[26474]: I0223 13:37:51.318151 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_c0dfc05d-bd62-4c0c-aae4-5d1f44de9449/installer/0.log"
Feb 23 13:37:52.054958 master-0 kubenswrapper[26474]: I0223 13:37:52.054894 26474 scope.go:117] "RemoveContainer" containerID="1e1b78fc7afc6c59c1c73199b3d27f1953027bef7ba43752742a26d0b87ff636"
Feb 23 13:37:52.109905 master-0 kubenswrapper[26474]: I0223 13:37:52.109846 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-646bd84bcd-mzrwt_17631a06-1002-4ff2-8d03-55948198b2ea/oauth-openshift/0.log"
Feb 23 13:37:52.328464 master-0 kubenswrapper[26474]: I0223 13:37:52.328396 26474 scope.go:117] "RemoveContainer" containerID="ee03b1c0337465a7c0af6335c9c24e04c2082bff1206b33e05dfa5b89adaef3f"
Feb 23 13:37:52.438651 master-0 kubenswrapper[26474]: I0223 13:37:52.438378 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-nktl9_e0063130-dfb5-4907-a000-f023a77c6441/assisted-installer-controller/0.log"
Feb 23 13:37:52.857918 master-0 kubenswrapper[26474]: I0223 13:37:52.857880 26474 scope.go:117] "RemoveContainer" containerID="f8cb3139fd1f37714ba2920438ee197b026b137f0729d65acdde39b3e39285fa"
Feb 23 13:37:52.900183 master-0 kubenswrapper[26474]: I0223 13:37:52.900159 26474 scope.go:117] "RemoveContainer" containerID="dcfbb177cfa5905f41b0ff0c848f08e880aabafd413f8269817da1537be9b3cd"
Feb 23 13:37:52.951046 master-0 kubenswrapper[26474]: I0223 13:37:52.950993 26474 scope.go:117] "RemoveContainer" containerID="e001bd9a45325901f341110aa8fb045be2e1b672111501378c7ece19cc056ec0"
Feb 23 13:37:53.002425 master-0 kubenswrapper[26474]: I0223 13:37:53.002076 26474 scope.go:117] "RemoveContainer" containerID="265e29f4d00888469263bbe789f7c7ad8ff43049bdfb6b0d69e56e28030069da"
Feb 23 13:37:53.084582 master-0 kubenswrapper[26474]: I0223 13:37:53.084539 26474 scope.go:117] "RemoveContainer" containerID="8f9bed3fe78674372b2096a3e64768c7ae9ee6f2c2ca03cf9ea2571b6506cfbb"
Feb 23 13:37:53.149902 master-0 kubenswrapper[26474]: I0223 13:37:53.149854 26474 scope.go:117] "RemoveContainer" containerID="12a4eab544abef5083adb6123f8f52ec9e162622794db36ce83ed338655072ab"
Feb 23 13:37:53.185418 master-0 kubenswrapper[26474]: I0223 13:37:53.185312 26474 scope.go:117] "RemoveContainer" containerID="3a16901f9d43b9ca07c800b6b2c486dcca7c3ee7411f065ebdb99dc0965d96e7"
Feb 23 13:37:53.210555 master-0 kubenswrapper[26474]: I0223 13:37:53.210508 26474 scope.go:117] "RemoveContainer" containerID="3fa61ec45fc04f3170b0bc41e5b5a71ae5d9aeb143891ca4a6a8046f8c617920"
Feb 23 13:37:53.249562 master-0 kubenswrapper[26474]: I0223 13:37:53.248638 26474 scope.go:117] "RemoveContainer" containerID="7d0c9f77d42982e3f6c4c2535ecef555447834781b47a9a83f1a6d55f2f9cf00"
Feb 23 13:37:53.312028 master-0 kubenswrapper[26474]: I0223 13:37:53.311326 26474 scope.go:117] "RemoveContainer" containerID="cf5631018557db759c7b8824aa3f2c163dd11007f294dce6a6e0c75cc64e9391"
Feb 23 13:37:53.479374 master-0 kubenswrapper[26474]: I0223 13:37:53.477186 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-rlbcj_f348bffa-b2f6-4695-88a7-923625e7fb02/authentication-operator/0.log"
Feb 23 13:37:53.492401 master-0 kubenswrapper[26474]: I0223 13:37:53.491478 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-rlbcj_f348bffa-b2f6-4695-88a7-923625e7fb02/authentication-operator/1.log"
Feb 23 13:37:53.751995 master-0 kubenswrapper[26474]: I0223 13:37:53.751935 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/must-gather-v9td5" event={"ID":"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d","Type":"ContainerStarted","Data":"de0a5d31ba22701e5ed4cb93a70f3b46a42dddf5d43018dc9c559f86079fd6de"}
Feb 23 13:37:53.751995 master-0 kubenswrapper[26474]: I0223 13:37:53.751993 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/must-gather-v9td5" event={"ID":"bcb5f4c0-2acd-4936-b8e6-9709474d2f3d","Type":"ContainerStarted","Data":"d10195969c5f65e201100c3d0be9f38b6b319d292e400ad93a061287153fdf53"}
Feb 23 13:37:53.794063 master-0 kubenswrapper[26474]: I0223 13:37:53.793949 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6l6d2/must-gather-v9td5" podStartSLOduration=2.474317026 podStartE2EDuration="13.793923931s" podCreationTimestamp="2026-02-23 13:37:40 +0000 UTC" firstStartedPulling="2026-02-23 13:37:41.580548197 +0000 UTC m=+1383.427055864" lastFinishedPulling="2026-02-23 13:37:52.900155102 +0000 UTC m=+1394.746662769" observedRunningTime="2026-02-23 13:37:53.783557497 +0000 UTC m=+1395.630065174" watchObservedRunningTime="2026-02-23 13:37:53.793923931 +0000 UTC m=+1395.640431608"
Feb 23 13:37:54.459966 master-0 kubenswrapper[26474]: I0223 13:37:54.459912 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-kcfgf_73ba4f16-0217-4bf1-8fc2-6b385eda0771/router/3.log"
Feb 23 13:37:54.471130 master-0 kubenswrapper[26474]: I0223 13:37:54.471067 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-kcfgf_73ba4f16-0217-4bf1-8fc2-6b385eda0771/router/2.log"
Feb 23 13:37:54.679529 master-0 kubenswrapper[26474]: I0223 13:37:54.679113 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"]
Feb 23 13:37:54.680987 master-0 kubenswrapper[26474]: I0223 13:37:54.680958 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.689558 master-0 kubenswrapper[26474]: I0223 13:37:54.689506 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"]
Feb 23 13:37:54.806189 master-0 kubenswrapper[26474]: I0223 13:37:54.806115 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phdjg\" (UniqueName: \"kubernetes.io/projected/c7e89242-ecc0-47a9-8a92-4664dec4792b-kube-api-access-phdjg\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.806189 master-0 kubenswrapper[26474]: I0223 13:37:54.806186 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-proc\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.806476 master-0 kubenswrapper[26474]: I0223 13:37:54.806244 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-sys\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.806476 master-0 kubenswrapper[26474]: I0223 13:37:54.806307 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-lib-modules\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.806543 master-0 kubenswrapper[26474]: I0223 13:37:54.806507 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-podres\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.908762 master-0 kubenswrapper[26474]: I0223 13:37:54.908403 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-podres\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.908762 master-0 kubenswrapper[26474]: I0223 13:37:54.908519 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phdjg\" (UniqueName: \"kubernetes.io/projected/c7e89242-ecc0-47a9-8a92-4664dec4792b-kube-api-access-phdjg\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.908762 master-0 kubenswrapper[26474]: I0223 13:37:54.908545 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-proc\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.908762 master-0 kubenswrapper[26474]: I0223 13:37:54.908577 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-sys\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.908762 master-0 kubenswrapper[26474]: I0223 13:37:54.908611 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-lib-modules\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.909104 master-0 kubenswrapper[26474]: I0223 13:37:54.908810 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-lib-modules\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.909854 master-0 kubenswrapper[26474]: I0223 13:37:54.909189 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-podres\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.909854 master-0 kubenswrapper[26474]: I0223 13:37:54.909610 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-proc\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.909854 master-0 kubenswrapper[26474]: I0223 13:37:54.909652 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c7e89242-ecc0-47a9-8a92-4664dec4792b-sys\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:54.931881 master-0 kubenswrapper[26474]: I0223 13:37:54.931831 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phdjg\" (UniqueName: \"kubernetes.io/projected/c7e89242-ecc0-47a9-8a92-4664dec4792b-kube-api-access-phdjg\") pod \"perf-node-gather-daemonset-hcwsp\" (UID: \"c7e89242-ecc0-47a9-8a92-4664dec4792b\") " pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:55.062115 master-0 kubenswrapper[26474]: I0223 13:37:55.062012 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:55.358627 master-0 kubenswrapper[26474]: I0223 13:37:55.350153 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6c65bdd8f8-vblb2_77ea2b54-bcc2-4c4e-9415-03984721b5b1/oauth-apiserver/0.log"
Feb 23 13:37:55.379424 master-0 kubenswrapper[26474]: I0223 13:37:55.379377 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6c65bdd8f8-vblb2_77ea2b54-bcc2-4c4e-9415-03984721b5b1/fix-audit-permissions/0.log"
Feb 23 13:37:55.546168 master-0 kubenswrapper[26474]: I0223 13:37:55.546117 26474 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"]
Feb 23 13:37:55.778682 master-0 kubenswrapper[26474]: I0223 13:37:55.778619 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp" event={"ID":"c7e89242-ecc0-47a9-8a92-4664dec4792b","Type":"ContainerStarted","Data":"4feb8c9c081e8c5f982596329394c5cdbba50fa9496e4f3204b563cde65feb6b"}
Feb 23 13:37:56.055570 master-0 kubenswrapper[26474]: I0223 13:37:56.055460 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-xljfn_bf57b864-25d7-4420-9052-04dd580a9f7d/kube-rbac-proxy/0.log"
Feb 23 13:37:56.077358 master-0 kubenswrapper[26474]: I0223 13:37:56.077286 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-xljfn_bf57b864-25d7-4420-9052-04dd580a9f7d/cluster-autoscaler-operator/0.log"
Feb 23 13:37:56.094015 master-0 kubenswrapper[26474]: I0223 13:37:56.093929 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-xljfn_bf57b864-25d7-4420-9052-04dd580a9f7d/cluster-autoscaler-operator/1.log"
Feb 23 13:37:56.115194 master-0 kubenswrapper[26474]: I0223 13:37:56.115140 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-vfkqg_5ede583b-44b0-42af-92c9-f7b8938f7843/cluster-baremetal-operator/1.log"
Feb 23 13:37:56.115731 master-0 kubenswrapper[26474]: I0223 13:37:56.115676 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-vfkqg_5ede583b-44b0-42af-92c9-f7b8938f7843/cluster-baremetal-operator/0.log"
Feb 23 13:37:56.139441 master-0 kubenswrapper[26474]: I0223 13:37:56.139385 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-vfkqg_5ede583b-44b0-42af-92c9-f7b8938f7843/baremetal-kube-rbac-proxy/0.log"
Feb 23 13:37:56.158865 master-0 kubenswrapper[26474]: I0223 13:37:56.158815 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-pqjsm_e5802841-52dc-4d15-a252-0eac70e9fbbc/control-plane-machine-set-operator/1.log"
Feb 23 13:37:56.159586 master-0 kubenswrapper[26474]: I0223 13:37:56.159552 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-pqjsm_e5802841-52dc-4d15-a252-0eac70e9fbbc/control-plane-machine-set-operator/0.log"
Feb 23 13:37:56.179492 master-0 kubenswrapper[26474]: I0223 13:37:56.179439 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-nm845_47dedc5d-1288-4020-b481-5dca68a7d437/kube-rbac-proxy/0.log"
Feb 23 13:37:56.199377 master-0 kubenswrapper[26474]: I0223 13:37:56.199325 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-nm845_47dedc5d-1288-4020-b481-5dca68a7d437/machine-api-operator/0.log"
Feb 23 13:37:56.200121 master-0 kubenswrapper[26474]: I0223 13:37:56.200105 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-nm845_47dedc5d-1288-4020-b481-5dca68a7d437/machine-api-operator/1.log"
Feb 23 13:37:56.790774 master-0 kubenswrapper[26474]: I0223 13:37:56.790693 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp" event={"ID":"c7e89242-ecc0-47a9-8a92-4664dec4792b","Type":"ContainerStarted","Data":"4e7a1f72ac51660f050c61939a985e49ff302860a66222c6d1c77f25553f3fe2"}
Feb 23 13:37:56.791455 master-0 kubenswrapper[26474]: I0223 13:37:56.790879 26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp"
Feb 23 13:37:57.589951 master-0 kubenswrapper[26474]: I0223 13:37:57.589843 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp" podStartSLOduration=3.589823387 podStartE2EDuration="3.589823387s" podCreationTimestamp="2026-02-23 13:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:37:56.815075305 +0000 UTC m=+1398.661582982" watchObservedRunningTime="2026-02-23 13:37:57.589823387 +0000 UTC m=+1399.436331064"
Feb 23 13:37:57.602784 master-0 kubenswrapper[26474]: I0223 13:37:57.602691 26474 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6l6d2/master-0-debug-8g9hs"]
Feb 23 13:37:57.605191 master-0 kubenswrapper[26474]: I0223 13:37:57.605139 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs"
Feb 23 13:37:57.679976 master-0 kubenswrapper[26474]: I0223 13:37:57.679922 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf_5b459832-b875-49a6-a7c3-253fa6c8e45a/cluster-cloud-controller-manager/0.log"
Feb 23 13:37:57.695909 master-0 kubenswrapper[26474]: I0223 13:37:57.695847 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf_5b459832-b875-49a6-a7c3-253fa6c8e45a/config-sync-controllers/0.log"
Feb 23 13:37:57.706459 master-0 kubenswrapper[26474]: I0223 13:37:57.706408 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-xlbbf_5b459832-b875-49a6-a7c3-253fa6c8e45a/kube-rbac-proxy/0.log"
Feb 23 13:37:57.778369 master-0 kubenswrapper[26474]: I0223 13:37:57.778271 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfsp6\" (UniqueName: \"kubernetes.io/projected/cf574e1c-398d-4c13-acb5-1f318ff98c4a-kube-api-access-nfsp6\") pod \"master-0-debug-8g9hs\" (UID: \"cf574e1c-398d-4c13-acb5-1f318ff98c4a\") " pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs"
Feb 23 13:37:57.778369 master-0 kubenswrapper[26474]: I0223 13:37:57.778372 26474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf574e1c-398d-4c13-acb5-1f318ff98c4a-host\") pod \"master-0-debug-8g9hs\" (UID: \"cf574e1c-398d-4c13-acb5-1f318ff98c4a\") " pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs"
Feb 23 13:37:57.880943 master-0 kubenswrapper[26474]: I0223 13:37:57.880790 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfsp6\" (UniqueName: \"kubernetes.io/projected/cf574e1c-398d-4c13-acb5-1f318ff98c4a-kube-api-access-nfsp6\") pod \"master-0-debug-8g9hs\" (UID: \"cf574e1c-398d-4c13-acb5-1f318ff98c4a\") " pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs"
Feb 23 13:37:57.881477 master-0 kubenswrapper[26474]: I0223 13:37:57.881102 26474 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf574e1c-398d-4c13-acb5-1f318ff98c4a-host\") pod \"master-0-debug-8g9hs\" (UID: \"cf574e1c-398d-4c13-acb5-1f318ff98c4a\") " pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs"
Feb 23 13:37:57.881477 master-0 kubenswrapper[26474]: I0223 13:37:57.881190 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf574e1c-398d-4c13-acb5-1f318ff98c4a-host\") pod \"master-0-debug-8g9hs\" (UID: \"cf574e1c-398d-4c13-acb5-1f318ff98c4a\") " pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs"
Feb 23 13:37:57.906396 master-0 kubenswrapper[26474]: I0223 13:37:57.904548 26474 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfsp6\" (UniqueName: \"kubernetes.io/projected/cf574e1c-398d-4c13-acb5-1f318ff98c4a-kube-api-access-nfsp6\") pod \"master-0-debug-8g9hs\" (UID: \"cf574e1c-398d-4c13-acb5-1f318ff98c4a\") " pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs"
Feb 23 13:37:57.934054 master-0 kubenswrapper[26474]: I0223 13:37:57.930822 26474 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs"
Feb 23 13:37:58.814042 master-0 kubenswrapper[26474]: I0223 13:37:58.813965 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs" event={"ID":"cf574e1c-398d-4c13-acb5-1f318ff98c4a","Type":"ContainerStarted","Data":"b03c40e5270cb34908f490ac8628b945f99c3a9b4ad46cdb34d8dae4a22a00c4"}
Feb 23 13:37:59.164845 master-0 kubenswrapper[26474]: I0223 13:37:59.163084 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-api-0_9f5997bb-bc87-4f09-803c-65532cef8cca/cinder-083a9-api-log/0.log"
Feb 23 13:37:59.179950 master-0 kubenswrapper[26474]: I0223 13:37:59.179782 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-api-0_9f5997bb-bc87-4f09-803c-65532cef8cca/cinder-api/0.log"
Feb 23 13:37:59.283577 master-0 kubenswrapper[26474]: I0223 13:37:59.283502 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-backup-0_63ab4585-edb2-4419-a1ea-d84c96c68709/cinder-backup/0.log"
Feb 23 13:37:59.303754 master-0 kubenswrapper[26474]: I0223 13:37:59.303157 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-backup-0_63ab4585-edb2-4419-a1ea-d84c96c68709/probe/0.log"
Feb 23 13:37:59.327039 master-0 kubenswrapper[26474]: I0223 13:37:59.326963 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-db-sync-fnmxd_40e2f6ca-d0dd-49a1-a43c-1403fca6fc1b/cinder-083a9-db-sync/0.log"
Feb 23 13:37:59.433096 master-0 kubenswrapper[26474]: I0223
13:37:59.432924 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-scheduler-0_f75e3be8-7018-4e2a-a798-f6e7d9a972ef/cinder-scheduler/0.log" Feb 23 13:37:59.476405 master-0 kubenswrapper[26474]: I0223 13:37:59.474382 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-scheduler-0_f75e3be8-7018-4e2a-a798-f6e7d9a972ef/probe/0.log" Feb 23 13:37:59.565404 master-0 kubenswrapper[26474]: I0223 13:37:59.565042 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-volume-lvm-iscsi-0_206b569e-fd7d-4b95-9655-aeffa25d2dda/cinder-volume/0.log" Feb 23 13:37:59.580130 master-0 kubenswrapper[26474]: I0223 13:37:59.578709 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-083a9-volume-lvm-iscsi-0_206b569e-fd7d-4b95-9655-aeffa25d2dda/probe/0.log" Feb 23 13:37:59.599949 master-0 kubenswrapper[26474]: I0223 13:37:59.599878 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-81ea-account-create-update-j42mq_391ff9cf-6e10-4566-8651-b240689eb1d5/mariadb-account-create-update/0.log" Feb 23 13:37:59.613562 master-0 kubenswrapper[26474]: I0223 13:37:59.612751 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-create-8mcwv_5661b4fe-3f9f-41cc-a335-ace88eda5968/mariadb-database-create/0.log" Feb 23 13:37:59.636099 master-0 kubenswrapper[26474]: I0223 13:37:59.636021 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7dfc76fc6c-ffppf_180291fa-ce8f-42e3-92c5-ec2c2bcb2e06/dnsmasq-dns/0.log" Feb 23 13:37:59.650326 master-0 kubenswrapper[26474]: I0223 13:37:59.648127 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7dfc76fc6c-ffppf_180291fa-ce8f-42e3-92c5-ec2c2bcb2e06/init/0.log" Feb 23 13:37:59.730503 master-0 kubenswrapper[26474]: I0223 13:37:59.729893 26474 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-87hx7_945907dd-f6b3-400f-b539-e1310eb11dd7/kube-rbac-proxy/0.log" Feb 23 13:37:59.782367 master-0 kubenswrapper[26474]: I0223 13:37:59.779568 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-4fec4-default-external-api-0_4392daca-d648-4740-b6ef-62a95c306245/glance-log/0.log" Feb 23 13:37:59.782367 master-0 kubenswrapper[26474]: I0223 13:37:59.780954 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-87hx7_945907dd-f6b3-400f-b539-e1310eb11dd7/cloud-credential-operator/0.log" Feb 23 13:37:59.799649 master-0 kubenswrapper[26474]: I0223 13:37:59.797156 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-4fec4-default-external-api-0_4392daca-d648-4740-b6ef-62a95c306245/glance-httpd/0.log" Feb 23 13:37:59.888487 master-0 kubenswrapper[26474]: I0223 13:37:59.888430 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-4fec4-default-internal-api-0_d053834d-7877-4283-b937-693f20d0c6a4/glance-log/0.log" Feb 23 13:37:59.904305 master-0 kubenswrapper[26474]: I0223 13:37:59.903591 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-4fec4-default-internal-api-0_d053834d-7877-4283-b937-693f20d0c6a4/glance-httpd/0.log" Feb 23 13:37:59.926394 master-0 kubenswrapper[26474]: I0223 13:37:59.925823 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-tcp42_29a70f49-894f-470f-bbe1-205e6714fe94/glance-db-sync/0.log" Feb 23 13:37:59.942516 master-0 kubenswrapper[26474]: I0223 13:37:59.941703 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-c7db585c-7gj9c_4671c55a-f3de-4d2d-8717-452ee80d2691/ironic-api-log/0.log" Feb 23 13:37:59.975389 master-0 kubenswrapper[26474]: I0223 13:37:59.974894 26474 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ironic-c7db585c-7gj9c_4671c55a-f3de-4d2d-8717-452ee80d2691/ironic-api/0.log" Feb 23 13:37:59.989934 master-0 kubenswrapper[26474]: I0223 13:37:59.989620 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-c7db585c-7gj9c_4671c55a-f3de-4d2d-8717-452ee80d2691/init/0.log" Feb 23 13:38:00.031930 master-0 kubenswrapper[26474]: I0223 13:38:00.031828 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_808fa98d-dace-4799-9059-a26510355d62/ironic-conductor/0.log" Feb 23 13:38:00.045621 master-0 kubenswrapper[26474]: I0223 13:38:00.045544 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_808fa98d-dace-4799-9059-a26510355d62/httpboot/0.log" Feb 23 13:38:00.063298 master-0 kubenswrapper[26474]: I0223 13:38:00.062606 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_808fa98d-dace-4799-9059-a26510355d62/dnsmasq/0.log" Feb 23 13:38:00.075897 master-0 kubenswrapper[26474]: I0223 13:38:00.075830 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_808fa98d-dace-4799-9059-a26510355d62/init/0.log" Feb 23 13:38:00.086980 master-0 kubenswrapper[26474]: I0223 13:38:00.086895 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_808fa98d-dace-4799-9059-a26510355d62/ironic-python-agent-init/0.log" Feb 23 13:38:00.943831 master-0 kubenswrapper[26474]: I0223 13:38:00.943789 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_808fa98d-dace-4799-9059-a26510355d62/pxe-init/0.log" Feb 23 13:38:00.955189 master-0 kubenswrapper[26474]: I0223 13:38:00.955125 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-create-smgfj_323323e6-77db-4dbe-b877-c3875b42211a/mariadb-database-create/0.log" Feb 23 13:38:00.984231 master-0 kubenswrapper[26474]: I0223 
13:38:00.984144 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-gzjvk_53c0fb4f-cbcb-4439-97c6-0b529f807785/ironic-db-sync/0.log" Feb 23 13:38:00.994174 master-0 kubenswrapper[26474]: I0223 13:38:00.994120 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-gzjvk_53c0fb4f-cbcb-4439-97c6-0b529f807785/init/0.log" Feb 23 13:38:01.008024 master-0 kubenswrapper[26474]: I0223 13:38:01.007947 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dc5b-account-create-update-t78mz_5a9ec263-54d6-4981-9127-e9bd62d1cf7d/mariadb-account-create-update/0.log" Feb 23 13:38:01.048632 master-0 kubenswrapper[26474]: I0223 13:38:01.048569 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_0894ec00-986c-4930-9ec0-6163e1e6f410/ironic-inspector-httpd/0.log" Feb 23 13:38:01.071410 master-0 kubenswrapper[26474]: I0223 13:38:01.068217 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_0894ec00-986c-4930-9ec0-6163e1e6f410/ironic-inspector/0.log" Feb 23 13:38:01.084497 master-0 kubenswrapper[26474]: I0223 13:38:01.084435 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_0894ec00-986c-4930-9ec0-6163e1e6f410/inspector-httpboot/0.log" Feb 23 13:38:01.092382 master-0 kubenswrapper[26474]: I0223 13:38:01.092279 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_0894ec00-986c-4930-9ec0-6163e1e6f410/ramdisk-logs/0.log" Feb 23 13:38:01.109426 master-0 kubenswrapper[26474]: I0223 13:38:01.109327 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_0894ec00-986c-4930-9ec0-6163e1e6f410/inspector-dnsmasq/0.log" Feb 23 13:38:01.119662 master-0 kubenswrapper[26474]: I0223 13:38:01.119035 26474 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-0_0894ec00-986c-4930-9ec0-6163e1e6f410/ironic-python-agent-init/0.log" Feb 23 13:38:01.136695 master-0 kubenswrapper[26474]: I0223 13:38:01.136633 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_0894ec00-986c-4930-9ec0-6163e1e6f410/inspector-pxe-init/0.log" Feb 23 13:38:01.146920 master-0 kubenswrapper[26474]: I0223 13:38:01.146863 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-d172-account-create-update-hzwp2_d97872e6-b11d-4f3c-b9b9-65814a655637/mariadb-account-create-update/0.log" Feb 23 13:38:01.160243 master-0 kubenswrapper[26474]: I0223 13:38:01.160184 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-create-v4ftr_2417f017-58fa-40a9-bd0a-ac6557cadd27/mariadb-database-create/0.log" Feb 23 13:38:01.172861 master-0 kubenswrapper[26474]: I0223 13:38:01.172803 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-mrlgq_e9fcecfd-b548-47da-800d-24deb9834fa7/ironic-inspector-db-sync/0.log" Feb 23 13:38:01.188737 master-0 kubenswrapper[26474]: I0223 13:38:01.188660 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-7d6b446974-djn5h_7af5b404-f4af-4e67-b355-916c6240db47/ironic-neutron-agent/2.log" Feb 23 13:38:01.191261 master-0 kubenswrapper[26474]: I0223 13:38:01.191218 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-7d6b446974-djn5h_7af5b404-f4af-4e67-b355-916c6240db47/ironic-neutron-agent/1.log" Feb 23 13:38:01.204892 master-0 kubenswrapper[26474]: I0223 13:38:01.204482 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-s6q8s_fe117f39-7efc-4bfd-bed4-125b46267fd6/keystone-bootstrap/0.log" Feb 23 13:38:01.215417 master-0 kubenswrapper[26474]: I0223 13:38:01.213627 26474 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-db-sync-zkbd6_4d5d620a-05d5-4a5d-8e3b-596cf44d0713/keystone-db-sync/0.log" Feb 23 13:38:01.281814 master-0 kubenswrapper[26474]: I0223 13:38:01.281756 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f6c5d9dcc-k6qfz_16a4bd6f-20fe-4899-874b-5bde9553a934/keystone-api/0.log" Feb 23 13:38:01.996419 master-0 kubenswrapper[26474]: I0223 13:38:01.996361 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/2.log" Feb 23 13:38:02.000137 master-0 kubenswrapper[26474]: I0223 13:38:02.000092 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-config-operator/3.log" Feb 23 13:38:02.011197 master-0 kubenswrapper[26474]: I0223 13:38:02.011159 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-8wrb6_90a694bb-fe3e-4478-bbb4-d2be9cd4c57f/openshift-api/0.log" Feb 23 13:38:03.206726 master-0 kubenswrapper[26474]: I0223 13:38:03.206659 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-vn4fp_69b3a8a2-8b91-4115-9d4c-67fc97b36811/console-operator/0.log" Feb 23 13:38:04.134358 master-0 kubenswrapper[26474]: I0223 13:38:04.134064 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74b4bc9995-5bzrc_76aba90e-5944-46b7-8ef9-cad8855264c7/console/0.log" Feb 23 13:38:04.188311 master-0 kubenswrapper[26474]: I0223 13:38:04.188246 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-955b69498-grk54_449e8cbf-8db6-4709-b92f-a42410095ed2/download-server/0.log" Feb 23 13:38:05.112757 master-0 kubenswrapper[26474]: I0223 13:38:05.112699 
26474 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-6l6d2/perf-node-gather-daemonset-hcwsp" Feb 23 13:38:05.240269 master-0 kubenswrapper[26474]: I0223 13:38:05.240225 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-gdvlh_affc63b7-db45-429d-82ff-e50f6aae51dc/cluster-storage-operator/0.log" Feb 23 13:38:05.242283 master-0 kubenswrapper[26474]: I0223 13:38:05.242243 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-gdvlh_affc63b7-db45-429d-82ff-e50f6aae51dc/cluster-storage-operator/1.log" Feb 23 13:38:05.256851 master-0 kubenswrapper[26474]: I0223 13:38:05.256802 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/3.log" Feb 23 13:38:05.257322 master-0 kubenswrapper[26474]: I0223 13:38:05.257279 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-zw4nq_5793184d-de96-49ad-a060-0fa0cf278a9c/snapshot-controller/4.log" Feb 23 13:38:05.296326 master-0 kubenswrapper[26474]: I0223 13:38:05.296274 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-f5n2p_4b9d6485-cf67-49c5-99c1-b8582a0bab70/csi-snapshot-controller-operator/0.log" Feb 23 13:38:05.298449 master-0 kubenswrapper[26474]: I0223 13:38:05.298423 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-f5n2p_4b9d6485-cf67-49c5-99c1-b8582a0bab70/csi-snapshot-controller-operator/1.log" Feb 23 13:38:06.247010 master-0 kubenswrapper[26474]: I0223 13:38:06.244929 26474 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-dns-operator_dns-operator-8c7d49845-g8fdn_f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/dns-operator/0.log" Feb 23 13:38:06.258425 master-0 kubenswrapper[26474]: I0223 13:38:06.257170 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-8c7d49845-g8fdn_f40bda0c-c3ab-451f-bf60-b4eaf0a5a74b/kube-rbac-proxy/0.log" Feb 23 13:38:07.058522 master-0 kubenswrapper[26474]: I0223 13:38:07.058424 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ljd8h"] Feb 23 13:38:07.073718 master-0 kubenswrapper[26474]: I0223 13:38:07.073653 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ljd8h"] Feb 23 13:38:07.242424 master-0 kubenswrapper[26474]: I0223 13:38:07.242378 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-ljphn_2acc6d35-5679-4fac-970f-3d2ff954cc33/dns/0.log" Feb 23 13:38:07.263694 master-0 kubenswrapper[26474]: I0223 13:38:07.263643 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-ljphn_2acc6d35-5679-4fac-970f-3d2ff954cc33/kube-rbac-proxy/0.log" Feb 23 13:38:07.285689 master-0 kubenswrapper[26474]: I0223 13:38:07.285642 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-rxc8b_6dc83a57-34c5-4c64-97d3-b6191ba690eb/dns-node-resolver/0.log" Feb 23 13:38:08.182245 master-0 kubenswrapper[26474]: I0223 13:38:08.180094 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-dk5t4_d9b02d3c-f671-4850-8c6e-315044a1376c/etcd-operator/2.log" Feb 23 13:38:08.188544 master-0 kubenswrapper[26474]: I0223 13:38:08.187484 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-dk5t4_d9b02d3c-f671-4850-8c6e-315044a1376c/etcd-operator/1.log" Feb 23 13:38:08.416979 master-0 
kubenswrapper[26474]: I0223 13:38:08.410690 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3b6be3-59de-456f-bf7e-2c40371e217e" path="/var/lib/kubelet/pods/1d3b6be3-59de-456f-bf7e-2c40371e217e/volumes" Feb 23 13:38:09.149417 master-0 kubenswrapper[26474]: I0223 13:38:09.149295 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcdctl/0.log" Feb 23 13:38:09.464572 master-0 kubenswrapper[26474]: I0223 13:38:09.464524 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd/0.log" Feb 23 13:38:09.520363 master-0 kubenswrapper[26474]: I0223 13:38:09.519376 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-metrics/0.log" Feb 23 13:38:09.641003 master-0 kubenswrapper[26474]: I0223 13:38:09.640949 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bc6581a5-b49c-4ad1-abd5-cfd583858288/memcached/0.log" Feb 23 13:38:09.648244 master-0 kubenswrapper[26474]: I0223 13:38:09.648164 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-readyz/0.log" Feb 23 13:38:09.665669 master-0 kubenswrapper[26474]: I0223 13:38:09.665616 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-rev/0.log" Feb 23 13:38:09.686350 master-0 kubenswrapper[26474]: I0223 13:38:09.686281 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/setup/0.log" Feb 23 13:38:09.710028 master-0 kubenswrapper[26474]: I0223 13:38:09.709662 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-ensure-env-vars/0.log" Feb 23 
13:38:09.729301 master-0 kubenswrapper[26474]: I0223 13:38:09.729165 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-resources-copy/0.log" Feb 23 13:38:09.819416 master-0 kubenswrapper[26474]: I0223 13:38:09.810559 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_29d3a080-c8a3-4359-9442-972bf4bb9b04/installer/0.log" Feb 23 13:38:09.833230 master-0 kubenswrapper[26474]: I0223 13:38:09.832800 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-655989fbf7-gkzkz_26e2695d-d72c-443d-94c6-efb4b4a6d6fc/neutron-api/0.log" Feb 23 13:38:09.847366 master-0 kubenswrapper[26474]: I0223 13:38:09.844431 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-655989fbf7-gkzkz_26e2695d-d72c-443d-94c6-efb4b4a6d6fc/neutron-httpd/0.log" Feb 23 13:38:09.857201 master-0 kubenswrapper[26474]: I0223 13:38:09.856912 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-p4kq8_fac8cfc7-d688-489f-ac4e-71f9363b2c5b/mariadb-database-create/0.log" Feb 23 13:38:09.857881 master-0 kubenswrapper[26474]: I0223 13:38:09.857817 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_c0dfc05d-bd62-4c0c-aae4-5d1f44de9449/installer/0.log" Feb 23 13:38:09.885557 master-0 kubenswrapper[26474]: I0223 13:38:09.885490 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-kqjnt_c6e1b664-e2c1-4b99-b1b9-9281ef4142d3/neutron-db-sync/0.log" Feb 23 13:38:09.900001 master-0 kubenswrapper[26474]: I0223 13:38:09.899960 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-e436-account-create-update-975k9_8f17f46f-162c-49b6-88bb-d8792f7c354f/mariadb-account-create-update/0.log" Feb 23 13:38:09.992540 master-0 kubenswrapper[26474]: I0223 13:38:09.992419 26474 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-api-0_5b0108db-c0d2-418a-a77f-e251a3c6eec4/nova-api-log/0.log" Feb 23 13:38:10.096061 master-0 kubenswrapper[26474]: I0223 13:38:10.095915 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5b0108db-c0d2-418a-a77f-e251a3c6eec4/nova-api-api/0.log" Feb 23 13:38:10.104758 master-0 kubenswrapper[26474]: I0223 13:38:10.104669 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-46be-account-create-update-v6qrd_c2bb7e81-33b2-457d-98f9-0a9114ce13f4/mariadb-account-create-update/0.log" Feb 23 13:38:10.119329 master-0 kubenswrapper[26474]: I0223 13:38:10.117821 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-db-create-whpzb_882d9e69-bb35-4914-a2ef-42fb05306b3a/mariadb-database-create/0.log" Feb 23 13:38:10.136593 master-0 kubenswrapper[26474]: I0223 13:38:10.136551 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-7274-account-create-update-2r52m_c631041f-70ef-485e-9113-f33f737f91f7/mariadb-account-create-update/0.log" Feb 23 13:38:10.148402 master-0 kubenswrapper[26474]: I0223 13:38:10.148351 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-cell-mapping-ng4xw_044ee558-e330-4d4c-acbc-05bfdc4cb4e0/nova-manage/0.log" Feb 23 13:38:10.272428 master-0 kubenswrapper[26474]: I0223 13:38:10.272355 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_71e3b8dd-9b1b-43d6-bbd9-34481d6a43ff/nova-cell0-conductor-conductor/0.log" Feb 23 13:38:10.292447 master-0 kubenswrapper[26474]: I0223 13:38:10.292375 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-db-sync-d78v8_3c28125b-7561-4020-90ab-9dd7bbd740f3/nova-cell0-conductor-db-sync/0.log" Feb 23 13:38:10.302166 master-0 kubenswrapper[26474]: I0223 13:38:10.302084 26474 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-db-create-qw4db_5cd82ed9-d70c-4c60-afb4-63db5a57aa79/mariadb-database-create/0.log" Feb 23 13:38:10.312196 master-0 kubenswrapper[26474]: I0223 13:38:10.312167 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-222a-account-create-update-6g2mv_3071f1bd-cdeb-433e-bd4b-a02190489f95/mariadb-account-create-update/0.log" Feb 23 13:38:10.338814 master-0 kubenswrapper[26474]: I0223 13:38:10.338755 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-cell-mapping-7c9bb_f0dc82dc-15d2-4c67-be7c-ce4f798dca77/nova-manage/0.log" Feb 23 13:38:10.412728 master-0 kubenswrapper[26474]: I0223 13:38:10.412661 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-compute-ironic-compute-0_4fe6487b-2c95-4029-b4da-8970da075f5d/nova-cell1-compute-ironic-compute-compute/0.log" Feb 23 13:38:10.542402 master-0 kubenswrapper[26474]: I0223 13:38:10.542269 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_461d5a34-8400-467c-acc6-91ce6bad84ee/nova-cell1-conductor-conductor/0.log" Feb 23 13:38:10.556137 master-0 kubenswrapper[26474]: I0223 13:38:10.556086 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-db-sync-6mnk7_5f1f58b0-74b0-4553-9667-9fe3a18fc4bf/nova-cell1-conductor-db-sync/0.log" Feb 23 13:38:10.564269 master-0 kubenswrapper[26474]: I0223 13:38:10.564219 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-db-create-bqkcr_199a7a01-a57b-473d-a293-4b88a1a946c8/mariadb-database-create/0.log" Feb 23 13:38:10.574802 master-0 kubenswrapper[26474]: I0223 13:38:10.574742 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-host-discover-gn8sx_222d4bdd-c14f-480f-9a90-c89fc731cf45/nova-manage/0.log" Feb 23 13:38:10.646064 master-0 kubenswrapper[26474]: I0223 13:38:10.645980 26474 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_848d5b91-e789-4b14-b3a0-347ff6e6656b/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 13:38:10.741854 master-0 kubenswrapper[26474]: I0223 13:38:10.741804 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4957a00c-e6d5-41c3-b294-65096a96c89d/nova-metadata-log/0.log" Feb 23 13:38:10.790498 master-0 kubenswrapper[26474]: I0223 13:38:10.790426 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-779979bdf7-shphl_92eaa2e2-61cd-4279-a81f-72db51308148/cluster-image-registry-operator/0.log" Feb 23 13:38:10.820467 master-0 kubenswrapper[26474]: I0223 13:38:10.820235 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-779979bdf7-shphl_92eaa2e2-61cd-4279-a81f-72db51308148/cluster-image-registry-operator/1.log" Feb 23 13:38:10.837884 master-0 kubenswrapper[26474]: I0223 13:38:10.837692 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-v7pp6_2fcfa52b-56eb-4399-88b8-5810794ad070/node-ca/0.log" Feb 23 13:38:10.965712 master-0 kubenswrapper[26474]: I0223 13:38:10.965380 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4957a00c-e6d5-41c3-b294-65096a96c89d/nova-metadata-metadata/0.log" Feb 23 13:38:11.061100 master-0 kubenswrapper[26474]: I0223 13:38:11.061036 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_56d49a97-e7b2-4dbe-a0b1-6aedcace0d27/nova-scheduler-scheduler/0.log" Feb 23 13:38:11.084364 master-0 kubenswrapper[26474]: I0223 13:38:11.084123 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60d0ffdf-e5a8-457e-ad9d-e23dd25679d1/galera/0.log" Feb 23 13:38:11.096879 master-0 kubenswrapper[26474]: I0223 13:38:11.096739 26474 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60d0ffdf-e5a8-457e-ad9d-e23dd25679d1/mysql-bootstrap/0.log" Feb 23 13:38:11.137848 master-0 kubenswrapper[26474]: I0223 13:38:11.137790 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7466826b-28a5-465e-9f60-484489173aa4/galera/0.log" Feb 23 13:38:11.158352 master-0 kubenswrapper[26474]: I0223 13:38:11.158286 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7466826b-28a5-465e-9f60-484489173aa4/mysql-bootstrap/0.log" Feb 23 13:38:11.166789 master-0 kubenswrapper[26474]: I0223 13:38:11.166734 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7a9cd3b6-e25a-4e3f-a7c9-d56b2a98ae83/openstackclient/0.log" Feb 23 13:38:11.179831 master-0 kubenswrapper[26474]: I0223 13:38:11.179780 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-k9gmf_a78635aa-f6c7-4fa7-8b1a-dab980e7e9ea/openstack-network-exporter/0.log" Feb 23 13:38:11.195620 master-0 kubenswrapper[26474]: I0223 13:38:11.195550 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlg7s_b309cab5-6d67-43b3-9a21-323910978e12/ovsdb-server/0.log" Feb 23 13:38:11.207051 master-0 kubenswrapper[26474]: I0223 13:38:11.206995 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlg7s_b309cab5-6d67-43b3-9a21-323910978e12/ovs-vswitchd/0.log" Feb 23 13:38:11.215504 master-0 kubenswrapper[26474]: I0223 13:38:11.215455 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlg7s_b309cab5-6d67-43b3-9a21-323910978e12/ovsdb-server-init/0.log" Feb 23 13:38:11.233091 master-0 kubenswrapper[26474]: I0223 13:38:11.233029 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qzxjb_106bae0e-78dc-455a-bca3-35057d5a145a/ovn-controller/0.log" 
Feb 23 13:38:11.247148 master-0 kubenswrapper[26474]: I0223 13:38:11.247083 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4e61746e-5d88-4e65-876e-940b3299dae0/ovn-northd/0.log"
Feb 23 13:38:11.251724 master-0 kubenswrapper[26474]: I0223 13:38:11.251675 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4e61746e-5d88-4e65-876e-940b3299dae0/openstack-network-exporter/0.log"
Feb 23 13:38:11.266288 master-0 kubenswrapper[26474]: I0223 13:38:11.266234 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_52bedc33-0750-4848-8abe-20a303ef99e5/ovsdbserver-nb/0.log"
Feb 23 13:38:11.274899 master-0 kubenswrapper[26474]: I0223 13:38:11.274852 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_52bedc33-0750-4848-8abe-20a303ef99e5/openstack-network-exporter/0.log"
Feb 23 13:38:11.293215 master-0 kubenswrapper[26474]: I0223 13:38:11.293167 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94/ovsdbserver-sb/0.log"
Feb 23 13:38:11.300703 master-0 kubenswrapper[26474]: I0223 13:38:11.300643 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_da0d2cda-0b9f-4cd2-a7b8-1642a7cd0f94/openstack-network-exporter/0.log"
Feb 23 13:38:11.342018 master-0 kubenswrapper[26474]: I0223 13:38:11.341717 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c5c8b54d6-wsqw8_3911d376-fa1c-4062-bce7-7cd07d9d3244/placement-log/0.log"
Feb 23 13:38:11.361375 master-0 kubenswrapper[26474]: I0223 13:38:11.358837 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c5c8b54d6-wsqw8_3911d376-fa1c-4062-bce7-7cd07d9d3244/placement-api/0.log"
Feb 23 13:38:11.387369 master-0 kubenswrapper[26474]: I0223 13:38:11.383649 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-jwdv9_51bc3c75-4dd4-4b8d-8fab-9035be69b72d/placement-db-sync/0.log"
Feb 23 13:38:11.430303 master-0 kubenswrapper[26474]: I0223 13:38:11.430254 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9502e2b0-2a39-47b6-b482-f13048ccdf41/rabbitmq/0.log"
Feb 23 13:38:11.436234 master-0 kubenswrapper[26474]: I0223 13:38:11.436139 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9502e2b0-2a39-47b6-b482-f13048ccdf41/setup-container/0.log"
Feb 23 13:38:11.505473 master-0 kubenswrapper[26474]: I0223 13:38:11.505407 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_08e48693-a2aa-426e-9718-5484046f9a4e/rabbitmq/0.log"
Feb 23 13:38:11.511765 master-0 kubenswrapper[26474]: I0223 13:38:11.511683 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_08e48693-a2aa-426e-9718-5484046f9a4e/setup-container/0.log"
Feb 23 13:38:11.582883 master-0 kubenswrapper[26474]: I0223 13:38:11.582820 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-b49747f5d-8c8s8_f02e5309-165c-41a7-b1d8-f433e9688643/proxy-httpd/0.log"
Feb 23 13:38:11.601314 master-0 kubenswrapper[26474]: I0223 13:38:11.601127 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-b49747f5d-8c8s8_f02e5309-165c-41a7-b1d8-f433e9688643/proxy-server/0.log"
Feb 23 13:38:11.613076 master-0 kubenswrapper[26474]: I0223 13:38:11.613035 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-8zvqt_f760f819-5bdd-4b3b-9374-7bde76377f34/swift-ring-rebalance/0.log"
Feb 23 13:38:11.652776 master-0 kubenswrapper[26474]: I0223 13:38:11.652727 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/account-server/0.log"
Feb 23 13:38:11.667836 master-0 kubenswrapper[26474]: I0223 13:38:11.667749 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/account-replicator/0.log"
Feb 23 13:38:11.674785 master-0 kubenswrapper[26474]: I0223 13:38:11.674741 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/account-auditor/0.log"
Feb 23 13:38:11.682588 master-0 kubenswrapper[26474]: I0223 13:38:11.682545 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/account-reaper/0.log"
Feb 23 13:38:11.690012 master-0 kubenswrapper[26474]: I0223 13:38:11.689976 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/container-server/0.log"
Feb 23 13:38:11.703636 master-0 kubenswrapper[26474]: I0223 13:38:11.703590 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/container-replicator/0.log"
Feb 23 13:38:11.709644 master-0 kubenswrapper[26474]: I0223 13:38:11.709492 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/container-auditor/0.log"
Feb 23 13:38:11.719114 master-0 kubenswrapper[26474]: I0223 13:38:11.718972 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/container-updater/0.log"
Feb 23 13:38:11.728240 master-0 kubenswrapper[26474]: I0223 13:38:11.728196 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/object-server/0.log"
Feb 23 13:38:11.739123 master-0 kubenswrapper[26474]: I0223 13:38:11.739071 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/object-replicator/0.log"
Feb 23 13:38:11.744242 master-0 kubenswrapper[26474]: I0223 13:38:11.744186 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/3.log"
Feb 23 13:38:11.744686 master-0 kubenswrapper[26474]: I0223 13:38:11.744640 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/ingress-operator/4.log"
Feb 23 13:38:11.752816 master-0 kubenswrapper[26474]: I0223 13:38:11.752767 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/object-auditor/0.log"
Feb 23 13:38:11.757401 master-0 kubenswrapper[26474]: I0223 13:38:11.757322 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-k9h69_878aa813-a8b9-4a6f-8086-778df276d0d7/kube-rbac-proxy/0.log"
Feb 23 13:38:11.760944 master-0 kubenswrapper[26474]: I0223 13:38:11.760885 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/object-updater/0.log"
Feb 23 13:38:11.768893 master-0 kubenswrapper[26474]: I0223 13:38:11.768620 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/object-expirer/0.log"
Feb 23 13:38:11.778186 master-0 kubenswrapper[26474]: I0223 13:38:11.777750 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/rsync/0.log"
Feb 23 13:38:11.796480 master-0 kubenswrapper[26474]: I0223 13:38:11.796225 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7172c21a-db8e-428a-9a0c-5ef060abafd3/swift-recon-cron/0.log"
Feb 23 13:38:12.726125 master-0 kubenswrapper[26474]: I0223 13:38:12.725925 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-t9cng_f5488d6b-d85c-4a31-a34e-ae5c41b95d18/serve-healthcheck-canary/0.log"
Feb 23 13:38:13.620819 master-0 kubenswrapper[26474]: I0223 13:38:13.620707 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-59b498fcfb-sswng_ce55de54-8441-4a16-8b57-598042869000/insights-operator/0.log"
Feb 23 13:38:16.066446 master-0 kubenswrapper[26474]: I0223 13:38:16.066373 26474 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs" event={"ID":"cf574e1c-398d-4c13-acb5-1f318ff98c4a","Type":"ContainerStarted","Data":"5c5c32381cd27678a6c732c9e0f22795fe2d84990700234e743b20d6ea776493"}
Feb 23 13:38:16.091658 master-0 kubenswrapper[26474]: I0223 13:38:16.091561 26474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6l6d2/master-0-debug-8g9hs" podStartSLOduration=2.306465972 podStartE2EDuration="19.091536419s" podCreationTimestamp="2026-02-23 13:37:57 +0000 UTC" firstStartedPulling="2026-02-23 13:37:57.99286671 +0000 UTC m=+1399.839374397" lastFinishedPulling="2026-02-23 13:38:14.777937167 +0000 UTC m=+1416.624444844" observedRunningTime="2026-02-23 13:38:16.079276869 +0000 UTC m=+1417.925784566" watchObservedRunningTime="2026-02-23 13:38:16.091536419 +0000 UTC m=+1417.938044116"
Feb 23 13:38:16.112063 master-0 kubenswrapper[26474]: I0223 13:38:16.112002 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_a2390c53-10a9-46d0-be73-d3ed303df396/alertmanager/0.log"
Feb 23 13:38:16.133971 master-0 kubenswrapper[26474]: I0223 13:38:16.133818 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_a2390c53-10a9-46d0-be73-d3ed303df396/config-reloader/0.log"
Feb 23 13:38:16.151567 master-0 kubenswrapper[26474]: I0223 13:38:16.151496 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_a2390c53-10a9-46d0-be73-d3ed303df396/kube-rbac-proxy-web/0.log"
Feb 23 13:38:16.170353 master-0 kubenswrapper[26474]: I0223 13:38:16.170294 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_a2390c53-10a9-46d0-be73-d3ed303df396/kube-rbac-proxy/0.log"
Feb 23 13:38:16.188677 master-0 kubenswrapper[26474]: I0223 13:38:16.188520 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_a2390c53-10a9-46d0-be73-d3ed303df396/kube-rbac-proxy-metric/0.log"
Feb 23 13:38:16.203366 master-0 kubenswrapper[26474]: I0223 13:38:16.203289 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_a2390c53-10a9-46d0-be73-d3ed303df396/prom-label-proxy/0.log"
Feb 23 13:38:16.218430 master-0 kubenswrapper[26474]: I0223 13:38:16.218381 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_a2390c53-10a9-46d0-be73-d3ed303df396/init-config-reloader/0.log"
Feb 23 13:38:16.280938 master-0 kubenswrapper[26474]: I0223 13:38:16.279771 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-6bb6d78bf-gjp8h_9bed6748-374e-4d8a-92a0-36d7d735d6b7/cluster-monitoring-operator/0.log"
Feb 23 13:38:16.300425 master-0 kubenswrapper[26474]: I0223 13:38:16.299560 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-r66qv_9ea16701-bd22-4fc0-90ea-f114b52574f8/kube-state-metrics/0.log"
Feb 23 13:38:16.316749 master-0 kubenswrapper[26474]: I0223 13:38:16.316702 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-r66qv_9ea16701-bd22-4fc0-90ea-f114b52574f8/kube-rbac-proxy-main/0.log"
Feb 23 13:38:16.329626 master-0 kubenswrapper[26474]: I0223 13:38:16.329594 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-r66qv_9ea16701-bd22-4fc0-90ea-f114b52574f8/kube-rbac-proxy-self/0.log"
Feb 23 13:38:16.351322 master-0 kubenswrapper[26474]: I0223 13:38:16.351163 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-79f8868b4-qms96_572b4e84-443f-4a5e-9f3a-c92bc899c245/metrics-server/0.log"
Feb 23 13:38:16.374224 master-0 kubenswrapper[26474]: I0223 13:38:16.374171 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-68cc5f9c9b-5htrk_675c2eae-ac56-4577-a599-884489d744af/monitoring-plugin/0.log"
Feb 23 13:38:16.411788 master-0 kubenswrapper[26474]: I0223 13:38:16.411686 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-tv6s2_ae8b0e50-59ee-44a9-9a66-8febb833b771/node-exporter/0.log"
Feb 23 13:38:16.433770 master-0 kubenswrapper[26474]: I0223 13:38:16.433671 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-tv6s2_ae8b0e50-59ee-44a9-9a66-8febb833b771/kube-rbac-proxy/0.log"
Feb 23 13:38:16.451295 master-0 kubenswrapper[26474]: I0223 13:38:16.451206 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-tv6s2_ae8b0e50-59ee-44a9-9a66-8febb833b771/init-textfile/0.log"
Feb 23 13:38:16.470066 master-0 kubenswrapper[26474]: I0223 13:38:16.469994 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-p6hj2_3ccbaed9-ab28-47c0-a585-648b9251fd11/kube-rbac-proxy-main/0.log"
Feb 23 13:38:16.488477 master-0 kubenswrapper[26474]: I0223 13:38:16.488430 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-p6hj2_3ccbaed9-ab28-47c0-a585-648b9251fd11/kube-rbac-proxy-self/0.log"
Feb 23 13:38:16.507873 master-0 kubenswrapper[26474]: I0223 13:38:16.507809 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-p6hj2_3ccbaed9-ab28-47c0-a585-648b9251fd11/openshift-state-metrics/0.log"
Feb 23 13:38:16.547783 master-0 kubenswrapper[26474]: I0223 13:38:16.547707 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_fbca6439-68f9-4cac-b0e6-1f66ff0aa11f/prometheus/0.log"
Feb 23 13:38:16.562138 master-0 kubenswrapper[26474]: I0223 13:38:16.562079 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_fbca6439-68f9-4cac-b0e6-1f66ff0aa11f/config-reloader/0.log"
Feb 23 13:38:16.579572 master-0 kubenswrapper[26474]: I0223 13:38:16.579482 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_fbca6439-68f9-4cac-b0e6-1f66ff0aa11f/thanos-sidecar/0.log"
Feb 23 13:38:16.596034 master-0 kubenswrapper[26474]: I0223 13:38:16.595986 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_fbca6439-68f9-4cac-b0e6-1f66ff0aa11f/kube-rbac-proxy-web/0.log"
Feb 23 13:38:16.613530 master-0 kubenswrapper[26474]: I0223 13:38:16.613460 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_fbca6439-68f9-4cac-b0e6-1f66ff0aa11f/kube-rbac-proxy/0.log"
Feb 23 13:38:16.635525 master-0 kubenswrapper[26474]: I0223 13:38:16.635481 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_fbca6439-68f9-4cac-b0e6-1f66ff0aa11f/kube-rbac-proxy-thanos/0.log"
Feb 23 13:38:16.650033 master-0 kubenswrapper[26474]: I0223 13:38:16.649982 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_fbca6439-68f9-4cac-b0e6-1f66ff0aa11f/init-config-reloader/0.log"
Feb 23 13:38:16.678647 master-0 kubenswrapper[26474]: I0223 13:38:16.678554 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-754bc4d665-2ksrm_b0a29266-d968-444d-82bb-085ff1d6e506/prometheus-operator/0.log"
Feb 23 13:38:16.691187 master-0 kubenswrapper[26474]: I0223 13:38:16.691140 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-754bc4d665-2ksrm_b0a29266-d968-444d-82bb-085ff1d6e506/kube-rbac-proxy/0.log"
Feb 23 13:38:16.710206 master-0 kubenswrapper[26474]: I0223 13:38:16.710163 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-75d56db95f-ld22t_54001c8e-cb57-47dc-8594-9daed4190bda/prometheus-operator-admission-webhook/0.log"
Feb 23 13:38:16.732360 master-0 kubenswrapper[26474]: I0223 13:38:16.732274 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6fd6fdc9d8-j4mb2_ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa/telemeter-client/0.log"
Feb 23 13:38:16.751789 master-0 kubenswrapper[26474]: I0223 13:38:16.751145 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6fd6fdc9d8-j4mb2_ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa/reload/0.log"
Feb 23 13:38:16.763861 master-0 kubenswrapper[26474]: I0223 13:38:16.763818 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6fd6fdc9d8-j4mb2_ba5102bd-8d1e-4001-ba0c-6c1e2b3ca4fa/kube-rbac-proxy/0.log"
Feb 23 13:38:16.794320 master-0 kubenswrapper[26474]: I0223 13:38:16.794265 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8595d4f886-qqtst_fa9a71b5-a37b-418a-b602-8eb3a94566b3/thanos-query/0.log"
Feb 23 13:38:16.810510 master-0 kubenswrapper[26474]: I0223 13:38:16.810377 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8595d4f886-qqtst_fa9a71b5-a37b-418a-b602-8eb3a94566b3/kube-rbac-proxy-web/0.log"
Feb 23 13:38:16.826881 master-0 kubenswrapper[26474]: I0223 13:38:16.826823 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8595d4f886-qqtst_fa9a71b5-a37b-418a-b602-8eb3a94566b3/kube-rbac-proxy/0.log"
Feb 23 13:38:16.841754 master-0 kubenswrapper[26474]: I0223 13:38:16.841714 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8595d4f886-qqtst_fa9a71b5-a37b-418a-b602-8eb3a94566b3/prom-label-proxy/0.log"
Feb 23 13:38:16.857270 master-0 kubenswrapper[26474]: I0223 13:38:16.857209 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8595d4f886-qqtst_fa9a71b5-a37b-418a-b602-8eb3a94566b3/kube-rbac-proxy-rules/0.log"
Feb 23 13:38:16.875424 master-0 kubenswrapper[26474]: I0223 13:38:16.875379 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8595d4f886-qqtst_fa9a71b5-a37b-418a-b602-8eb3a94566b3/kube-rbac-proxy-metrics/0.log"
Feb 23 13:38:17.061411 master-0 kubenswrapper[26474]: I0223 13:38:17.061154 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-81ea-account-create-update-j42mq"]
Feb 23 13:38:17.087468 master-0 kubenswrapper[26474]: I0223 13:38:17.087402 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-p4kq8"]
Feb 23 13:38:17.100406 master-0 kubenswrapper[26474]: I0223 13:38:17.098910 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tcp42"]
Feb 23 13:38:17.110668 master-0 kubenswrapper[26474]: I0223 13:38:17.110300 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e436-account-create-update-975k9"]
Feb 23 13:38:17.119822 master-0 kubenswrapper[26474]: I0223 13:38:17.119761 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8mcwv"]
Feb 23 13:38:17.129531 master-0 kubenswrapper[26474]: I0223 13:38:17.129449 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-81ea-account-create-update-j42mq"]
Feb 23 13:38:17.139797 master-0 kubenswrapper[26474]: I0223 13:38:17.139684 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-p4kq8"]
Feb 23 13:38:17.153461 master-0 kubenswrapper[26474]: I0223 13:38:17.153370 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tcp42"]
Feb 23 13:38:17.165827 master-0 kubenswrapper[26474]: I0223 13:38:17.165747 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-8mcwv"]
Feb 23 13:38:17.177475 master-0 kubenswrapper[26474]: I0223 13:38:17.177192 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e436-account-create-update-975k9"]
Feb 23 13:38:18.408496 master-0 kubenswrapper[26474]: I0223 13:38:18.408358 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a70f49-894f-470f-bbe1-205e6714fe94" path="/var/lib/kubelet/pods/29a70f49-894f-470f-bbe1-205e6714fe94/volumes"
Feb 23 13:38:18.409730 master-0 kubenswrapper[26474]: I0223 13:38:18.409497 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="391ff9cf-6e10-4566-8651-b240689eb1d5" path="/var/lib/kubelet/pods/391ff9cf-6e10-4566-8651-b240689eb1d5/volumes"
Feb 23 13:38:18.410538 master-0 kubenswrapper[26474]: I0223 13:38:18.410342 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5661b4fe-3f9f-41cc-a335-ace88eda5968" path="/var/lib/kubelet/pods/5661b4fe-3f9f-41cc-a335-ace88eda5968/volumes"
Feb 23 13:38:18.411390 master-0 kubenswrapper[26474]: I0223 13:38:18.411366 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f17f46f-162c-49b6-88bb-d8792f7c354f" path="/var/lib/kubelet/pods/8f17f46f-162c-49b6-88bb-d8792f7c354f/volumes"
Feb 23 13:38:18.412652 master-0 kubenswrapper[26474]: I0223 13:38:18.412629 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac8cfc7-d688-489f-ac4e-71f9363b2c5b" path="/var/lib/kubelet/pods/fac8cfc7-d688-489f-ac4e-71f9363b2c5b/volumes"
Feb 23 13:38:18.660785 master-0 kubenswrapper[26474]: I0223 13:38:18.660662 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6bhkv_c274d379-166c-4181-9e3b-8a27e4bfcc8e/controller/0.log"
Feb 23 13:38:18.672216 master-0 kubenswrapper[26474]: I0223 13:38:18.672159 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6bhkv_c274d379-166c-4181-9e3b-8a27e4bfcc8e/kube-rbac-proxy/0.log"
Feb 23 13:38:18.700223 master-0 kubenswrapper[26474]: I0223 13:38:18.700175 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/controller/0.log"
Feb 23 13:38:19.945871 master-0 kubenswrapper[26474]: I0223 13:38:19.945803 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/frr/0.log"
Feb 23 13:38:19.968253 master-0 kubenswrapper[26474]: I0223 13:38:19.968198 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/reloader/0.log"
Feb 23 13:38:19.990908 master-0 kubenswrapper[26474]: I0223 13:38:19.985806 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/frr-metrics/0.log"
Feb 23 13:38:20.003863 master-0 kubenswrapper[26474]: I0223 13:38:20.003803 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/kube-rbac-proxy/0.log"
Feb 23 13:38:20.013091 master-0 kubenswrapper[26474]: I0223 13:38:20.013038 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/kube-rbac-proxy-frr/0.log"
Feb 23 13:38:20.024863 master-0 kubenswrapper[26474]: I0223 13:38:20.024815 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-frr-files/0.log"
Feb 23 13:38:20.037394 master-0 kubenswrapper[26474]: I0223 13:38:20.036643 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-reloader/0.log"
Feb 23 13:38:20.051985 master-0 kubenswrapper[26474]: I0223 13:38:20.051866 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-metrics/0.log"
Feb 23 13:38:20.072056 master-0 kubenswrapper[26474]: I0223 13:38:20.071836 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-rmg9d_7d38b74b-692d-400d-9cb0-bdfe09afc08f/frr-k8s-webhook-server/0.log"
Feb 23 13:38:20.113904 master-0 kubenswrapper[26474]: I0223 13:38:20.113858 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-699c7b98cc-68v7n_b482fa9f-094a-41d1-8259-fc9d625f0b65/manager/0.log"
Feb 23 13:38:20.136667 master-0 kubenswrapper[26474]: I0223 13:38:20.136629 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8bb78c4ff-kc79t_7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da/webhook-server/0.log"
Feb 23 13:38:20.484575 master-0 kubenswrapper[26474]: I0223 13:38:20.484509 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tms9_cdefb8f6-8066-4ce4-b76c-5b19831081c9/speaker/0.log"
Feb 23 13:38:20.503740 master-0 kubenswrapper[26474]: I0223 13:38:20.503407 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tms9_cdefb8f6-8066-4ce4-b76c-5b19831081c9/kube-rbac-proxy/0.log"
Feb 23 13:38:20.673369 master-0 kubenswrapper[26474]: I0223 13:38:20.673303 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8_b520dde4-e5e6-48b9-ae45-e96dc19be06f/extract/0.log"
Feb 23 13:38:20.683001 master-0 kubenswrapper[26474]: I0223 13:38:20.682955 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8_b520dde4-e5e6-48b9-ae45-e96dc19be06f/util/0.log"
Feb 23 13:38:20.695217 master-0 kubenswrapper[26474]: I0223 13:38:20.695140 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda144r7z8_b520dde4-e5e6-48b9-ae45-e96dc19be06f/pull/0.log"
Feb 23 13:38:23.171434 master-0 kubenswrapper[26474]: I0223 13:38:23.169278 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-sj5wd_0d58817c-970f-47b1-a5a5-a491f3e93426/cluster-node-tuning-operator/1.log"
Feb 23 13:38:23.171434 master-0 kubenswrapper[26474]: I0223 13:38:23.169946 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-sj5wd_0d58817c-970f-47b1-a5a5-a491f3e93426/cluster-node-tuning-operator/0.log"
Feb 23 13:38:23.212387 master-0 kubenswrapper[26474]: I0223 13:38:23.203216 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-mjpd9_8422281d-af45-4f17-8f15-ac3fd9da4bbc/tuned/0.log"
Feb 23 13:38:24.087859 master-0 kubenswrapper[26474]: I0223 13:38:24.087713 26474 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zkbd6"]
Feb 23 13:38:24.109097 master-0 kubenswrapper[26474]: I0223 13:38:24.109026 26474 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zkbd6"]
Feb 23 13:38:24.313942 master-0 kubenswrapper[26474]: I0223 13:38:24.313890 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6bhkv_c274d379-166c-4181-9e3b-8a27e4bfcc8e/controller/0.log"
Feb 23 13:38:24.320262 master-0 kubenswrapper[26474]: I0223 13:38:24.320209 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6bhkv_c274d379-166c-4181-9e3b-8a27e4bfcc8e/kube-rbac-proxy/0.log"
Feb 23 13:38:24.365492 master-0 kubenswrapper[26474]: I0223 13:38:24.364668 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/controller/0.log"
Feb 23 13:38:24.410441 master-0 kubenswrapper[26474]: I0223 13:38:24.410321 26474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5d620a-05d5-4a5d-8e3b-596cf44d0713" path="/var/lib/kubelet/pods/4d5d620a-05d5-4a5d-8e3b-596cf44d0713/volumes"
Feb 23 13:38:24.552940 master-0 kubenswrapper[26474]: I0223 13:38:24.552887 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-jj5xq_8a5df28b-7125-4bf9-81fd-bbf8b9e5dec4/manager/0.log"
Feb 23 13:38:25.280426 master-0 kubenswrapper[26474]: I0223 13:38:25.280367 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-cj6hr_4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/kube-apiserver-operator/0.log"
Feb 23 13:38:25.288782 master-0 kubenswrapper[26474]: I0223 13:38:25.288719 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-cj6hr_4eab6dca-0e70-49d3-9e4b-e5dba46c0a1a/kube-apiserver-operator/1.log"
Feb 23 13:38:25.661459 master-0 kubenswrapper[26474]: I0223 13:38:25.661308 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-67nk6_22f5345b-e9af-4977-b60c-bd8c78e5dbf5/manager/0.log"
Feb 23 13:38:25.674529 master-0 kubenswrapper[26474]: I0223 13:38:25.674490 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-v6gfb_13a0dd7d-3565-428b-b1f6-75a3956c808b/manager/0.log"
Feb 23 13:38:25.809518 master-0 kubenswrapper[26474]: I0223 13:38:25.809457 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-xbnp5_8904511a-508f-4a31-a4b0-15cb665eeb6d/manager/0.log"
Feb 23 13:38:25.827409 master-0 kubenswrapper[26474]: I0223 13:38:25.822176 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-kn7n2_0a34c409-b181-4673-8983-0195a142c22d/manager/0.log"
Feb 23 13:38:25.840395 master-0 kubenswrapper[26474]: I0223 13:38:25.840313 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-fphhs_6dc3991b-f1a6-436f-aa4c-19d6e5e1d376/manager/0.log"
Feb 23 13:38:26.207628 master-0 kubenswrapper[26474]: I0223 13:38:26.207578 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/frr/0.log"
Feb 23 13:38:26.220205 master-0 kubenswrapper[26474]: I0223 13:38:26.220153 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/reloader/0.log"
Feb 23 13:38:26.222496 master-0 kubenswrapper[26474]: I0223 13:38:26.222462 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5f879c76b6-q5tzm_6ad5cdc1-3784-4521-9f4f-e3f3877f8c31/manager/0.log"
Feb 23 13:38:26.231777 master-0 kubenswrapper[26474]: I0223 13:38:26.231723 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/frr-metrics/0.log"
Feb 23 13:38:26.245175 master-0 kubenswrapper[26474]: I0223 13:38:26.245118 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/kube-rbac-proxy/0.log"
Feb 23 13:38:26.261377 master-0 kubenswrapper[26474]: I0223 13:38:26.260168 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/kube-rbac-proxy-frr/0.log"
Feb 23 13:38:26.273379 master-0 kubenswrapper[26474]: I0223 13:38:26.273194 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-frr-files/0.log"
Feb 23 13:38:26.277451 master-0 kubenswrapper[26474]: I0223 13:38:26.277385 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_54b76471-bb9d-45a1-b3be-53e4f013e604/installer/0.log"
Feb 23 13:38:26.289663 master-0 kubenswrapper[26474]: I0223 13:38:26.289498 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-h6q84_6bcc1ece-0c36-4593-9134-31adc6f5b6e3/manager/0.log"
Feb 23 13:38:26.319258 master-0 kubenswrapper[26474]: I0223 13:38:26.319197 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-reloader/0.log"
Feb 23 13:38:26.332315 master-0 kubenswrapper[26474]: I0223 13:38:26.332262 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8mmw5_96d9fb56-251d-4ab0-97b6-63645853e820/cp-metrics/0.log"
Feb 23 13:38:26.337408 master-0 kubenswrapper[26474]: I0223 13:38:26.337366 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_27c1e327-cb40-4b36-b371-20d1271b8d8d/installer/0.log"
Feb 23 13:38:26.348706 master-0 kubenswrapper[26474]: I0223 13:38:26.348573 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-rmg9d_7d38b74b-692d-400d-9cb0-bdfe09afc08f/frr-k8s-webhook-server/0.log"
Feb 23 13:38:26.390309 master-0 kubenswrapper[26474]: I0223 13:38:26.389769 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_75f01779-caef-46f3-ac91-89f32798535b/installer/0.log"
Feb 23 13:38:26.401766 master-0 kubenswrapper[26474]: I0223 13:38:26.401720 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-699c7b98cc-68v7n_b482fa9f-094a-41d1-8259-fc9d625f0b65/manager/0.log"
Feb 23 13:38:26.418519 master-0 kubenswrapper[26474]: I0223 13:38:26.418437 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8bb78c4ff-kc79t_7fb5eb18-ebfc-4d77-a7f3-29f2ee7a58da/webhook-server/0.log"
Feb 23 13:38:26.429237 master-0 kubenswrapper[26474]: I0223 13:38:26.429177 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-2tm5t_2e3f02c0-ac53-4ed1-a735-c46648724b7c/manager/0.log"
Feb 23 13:38:26.446632 master-0 kubenswrapper[26474]: I0223 13:38:26.446582 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-g5pv6_130ccb2a-cc03-4cb7-a439-fc8769a64b63/manager/0.log"
Feb 23 13:38:26.525137 master-0 kubenswrapper[26474]: I0223 13:38:26.525077 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-z98lb_68e3868d-86e1-4564-84ec-e290e9ac1aa7/manager/0.log"
Feb 23 13:38:26.615522 master-0 kubenswrapper[26474]: I0223 13:38:26.612118 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-wwjlb_8f0efba8-d92e-48b3-affc-1f155052edeb/manager/0.log"
Feb 23 13:38:26.810210 master-0 kubenswrapper[26474]: I0223 13:38:26.809851 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/kube-apiserver/0.log"
Feb 23 13:38:26.829420 master-0 kubenswrapper[26474]: I0223 13:38:26.829379 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-vrkfv_bce90f86-6940-4838-8b91-09eccef0ada1/manager/0.log"
Feb 23 13:38:26.836649 master-0 kubenswrapper[26474]: I0223 13:38:26.836476 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/kube-apiserver-cert-syncer/0.log"
Feb 23 13:38:26.843321 master-0 kubenswrapper[26474]: I0223 13:38:26.843268 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-k85ff_81d4eac4-5e91-43b6-9dcd-485fd51b32da/manager/0.log"
Feb 23 13:38:26.852984 master-0 kubenswrapper[26474]: I0223 13:38:26.852954 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/kube-apiserver-cert-regeneration-controller/0.log"
Feb 23 13:38:26.855558 master-0 kubenswrapper[26474]: I0223 13:38:26.855510 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-579b7786b9jtxq6_e583a795-8ab8-4cb3-ab87-770e147b4fcd/manager/0.log"
Feb 23 13:38:26.870298 master-0 kubenswrapper[26474]: I0223 13:38:26.870248 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/kube-apiserver-insecure-readyz/0.log"
Feb 23 13:38:26.895981 master-0 kubenswrapper[26474]: I0223 13:38:26.895932 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/kube-apiserver-check-endpoints/0.log"
Feb 23 13:38:26.913177 master-0 kubenswrapper[26474]: I0223 13:38:26.913139 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/setup/0.log"
Feb 23 13:38:26.916964 master-0 kubenswrapper[26474]: I0223 13:38:26.916928 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tms9_cdefb8f6-8066-4ce4-b76c-5b19831081c9/speaker/0.log"
Feb 23 13:38:26.924514 master-0 kubenswrapper[26474]: I0223 13:38:26.924483 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tms9_cdefb8f6-8066-4ce4-b76c-5b19831081c9/kube-rbac-proxy/0.log"
Feb 23 13:38:26.991071 master-0 kubenswrapper[26474]: I0223 13:38:26.990797 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-55c649df44-7c98k_89bfef34-b109-4625-accc-069dcc323de6/operator/0.log"
Feb 23 13:38:27.798364 master-0 kubenswrapper[26474]: I0223 13:38:27.798247 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5dc486cffc-75wjb_340bb764-ee68-42e8-81da-a6eb1790da92/manager/0.log"
Feb 23 13:38:27.826933 master-0 kubenswrapper[26474]: I0223 13:38:27.826867 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ngknx_e79bc2ef-593e-4f36-a1ac-c70064cb331d/registry-server/0.log"
Feb 23 13:38:27.888938 master-0 kubenswrapper[26474]: I0223 13:38:27.888877 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-w2ql6_51fe69f3-87d3-4318-87af-cb2cc650c102/manager/0.log"
Feb 23 13:38:27.921839 master-0 kubenswrapper[26474]: I0223 13:38:27.921783 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-22lb7_50c65748-710a-45bf-b87c-da417b425d24/manager/0.log"
Feb 23 13:38:27.936450 master-0 kubenswrapper[26474]: I0223 13:38:27.936387 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4cbz9_5674fd74-f83f-4fff-8274-02567d473982/operator/0.log"
Feb 23 13:38:27.960370 master-0 kubenswrapper[26474]: I0223 13:38:27.959772 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-b474t_a98204b6-f613-417e-a067-f418edda4d8f/manager/0.log"
Feb 23 13:38:27.971098 master-0 kubenswrapper[26474]: I0223 13:38:27.971052 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-rbgvn_1b3fd8f4-0323-4cf3-a20c-94ffd694f226/manager/0.log"
Feb 23 13:38:27.983363 master-0 kubenswrapper[26474]: I0223 13:38:27.981494 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-nrq67_2522d9b4-e710-4c6c-babe-50b15608f82f/manager/0.log"
Feb 23 13:38:28.002411 master-0 kubenswrapper[26474]: I0223 13:38:28.002373 26474 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-vpgcl_62f3d897-cddf-4020-ac9e-fe028bf95c21/manager/0.log" Feb 23 13:38:28.003726 master-0 kubenswrapper[26474]: I0223 13:38:28.003684 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-cqmh7_fce9f67d-0b27-41e3-ba4c-ed9cca25703e/kube-rbac-proxy/0.log" Feb 23 13:38:28.017852 master-0 kubenswrapper[26474]: I0223 13:38:28.017815 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-cqmh7_fce9f67d-0b27-41e3-ba4c-ed9cca25703e/manager/1.log" Feb 23 13:38:28.092989 master-0 kubenswrapper[26474]: I0223 13:38:28.092867 26474 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-cqmh7_fce9f67d-0b27-41e3-ba4c-ed9cca25703e/manager/0.log"